Fortuitous Forgetting in Connectionist Networks


Introduction

This repository includes reference code for the paper Fortuitous Forgetting in Connectionist Networks (ICLR 2022).

@inproceedings{zhou2022fortuitous,
  title={Fortuitous Forgetting in Connectionist Networks},
  author={Hattie Zhou and Ankit Vani and Hugo Larochelle and Aaron Courville},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=ei3SY1_zYsE}
}

Targeted Forgetting

This code implements the experiments on partial weight perturbations and their effects on easy versus hard examples. Scripts are stored in /targeted_forgetting.

To run KE-style forgetting:

python mixed_group_training.py --seed 1 --train_perc 0.1 --random_perc 0.1 --keep_perc 0.5 --train_iters 50000 --fname new_rand_reinit_train0.1_mislabel0.1 --no_wandb

To run IMP-style forgetting:

python mixed_group_training.py --seed 1 --train_perc 1 --random_perc 0.0 --keep_perc 0.3 --train_iters 50000 --weight_mask --reset_to_zero --rewind_to_init --margin_groups --fname new_weight_rewind_zero_train1_margin0.1 --no_wandb
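
For intuition, here is a minimal sketch of the kind of partial weight perturbation these flags control: keep a fraction of each parameter tensor and re-initialize (or zero out) the rest. It is an illustration, not the repository's implementation; forget_weights, keep_frac, and by_magnitude are hypothetical names mirroring the --keep_perc, --weight_mask, and --reset_to_zero flags, and options such as --rewind_to_init are omitted.

import torch

def forget_weights(model, keep_frac=0.5, by_magnitude=False, reset_to_zero=False):
    # Keep a fraction of each parameter tensor and forget the rest, in place.
    # by_magnitude=False roughly mimics KE-style forgetting (keep a random subset);
    # by_magnitude=True roughly mimics IMP-style forgetting (keep the largest weights).
    for param in model.parameters():
        if by_magnitude:
            flat = param.data.abs().flatten()
            k = int(keep_frac * flat.numel())
            threshold = flat.topk(k).values.min() if k > 0 else float("inf")
            keep = param.data.abs() >= threshold
        else:
            keep = torch.rand_like(param.data) < keep_frac
        forgotten = torch.zeros_like(param.data) if reset_to_zero \
            else torch.randn_like(param.data) * param.data.std()
        param.data = torch.where(keep, param.data, forgotten)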

Later Layer Forgetting

This code builds upon the repository for Knowledge Evolution in Neural Networks. Scripts are stored in /llf_ke.

To run 10 generations of LLF on the Flower102 dataset:

python train_KE_cls.py --epochs 200 --num_generations 11 --name resetlayer4_flower_resnet18 --weight_decay 0.0001 --arch Split_ResNet18 --reset_layer_name layer4 --set Flower102 --data $DATA_DIR --no_wandb
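
Conceptually, each LLF generation re-initializes the later layers (here layer4) and then retrains the whole network. Below is a minimal sketch, assuming a plain torchvision ResNet-18 rather than the repository's Split_ResNet18; reset_layer is a hypothetical helper.

from torchvision.models import resnet18

def reset_layer(model, layer_name="layer4"):
    # Re-initialize every submodule of layer_name that has learnable parameters.
    for name, module in model.named_modules():
        if name.startswith(layer_name) and hasattr(module, "reset_parameters"):
            module.reset_parameters()

model = resnet18(num_classes=102)  # Flower102 has 102 classes
for generation in range(11):       # generation 0 is the initial training run
    if generation > 0:
        reset_layer(model, "layer4")  # forget the later layers
    # ... train for --epochs epochs here ...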

To run 10 generations of KE:

python train_KE_cls.py --epochs 200 --num_generations 11 --name ke_kels_flower_resnet18 --weight_decay 0.0001 --arch Split_ResNet18 --split_rate 0.8 --split_mode kels --set Flower102 --data $DATA_DIR --no_wandb

To run the long baseline (a single 2200-epoch run, matching the total epochs of 11 generations of 200 epochs each) on the Flower102 dataset:

python train_KE_cls.py --epochs 2200 --num_generations 1 --name resetlayer4_flower_resnet18_long2200 --weight_decay 0.0001 --arch Split_ResNet18 --reset_layer_name layer4 --set Flower102 --eval_intermediate_tst 200 --data $DATA_DIR --no_wandb

To run the freeze-later-layers experiment:

python train_KE_cls.py --epochs 200 --num_generations 11 --name resetlayer4_flower_resnet18_freeze_reset_layers --weight_decay 0.0001 --arch Split_ResNet18 --reset_layer_name layer4 --data $DATA_DIR --set Flower102 --reverse_freeze --freeze_non_reset --optimizer sgd_TEMP --no_wandb

To run the freeze-early-layers experiment:

python train_KE_cls.py --epochs 200 --num_generations 11 --name resetlayer4_flower_resnet18_freeze_nonreset_layers --weight_decay 0.0001 --arch Split_ResNet18 --reset_layer_name layer4 --data $DATA_DIR --set Flower102 --freeze_non_reset --optimizer sgd_TEMP --no_wandb

To run the freeze-later-layers experiment with a fixed seed:

python train_KE_cls.py --epochs 200 --num_generations 11 --name resetlayer4_flower_resnet18_freeze_reset_layers --weight_decay 0.0001 --arch Split_ResNet18 --reset_layer_name layer4 --data $DATA_DIR --set Flower102 --reverse_freeze --freeze_non_reset --optimizer sgd_TEMP --seed 0 --fix_seed --no_wandb
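
The freeze experiments train only part of the network after each reset. The sketch below illustrates the freezing logic using requires_grad; freeze_reset is a hypothetical argument standing in for the --freeze_non_reset / --reverse_freeze flag combinations above.

def freeze_layers(model, reset_layer="layer4", freeze_reset=True):
    # freeze_reset=True  ~ --freeze_non_reset --reverse_freeze (later layers frozen)
    # freeze_reset=False ~ --freeze_non_reset alone (earlier layers frozen)
    for name, param in model.named_parameters():
        in_reset = name.startswith(reset_layer)
        param.requires_grad_(in_reset != freeze_reset)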

Ease-of-Teaching

This code builds upon the repository for Ease-of-Teaching and Language Structure from Emergent Communication. Scripts are stored in /ease_of_teaching.

To run the no-reset baseline:

python forget_train.py --fname baseline_no_reset --seed 0 --no_wandb

To run the reset-receiver baseline:

python forget_train.py --resetNum 50 --fname baseline_reset_receiver --seed 0 --reset_receiver --no_wandb

To run partial balanced forgetting (PBF):

python forget_train.py --resetNum 100 --fname same_weight_reinit_sender10_receiver10_reset100 --seed 0 --forget_sender --sender_keep_perc 0.1 --forget_receiver --receiver_keep_perc 0.1 --weight_mask --same_mask --no_wandb
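
PBF periodically re-initializes all but a small fraction of both the sender's and the receiver's weights. A loose sketch of one forgetting step follows; forget_fraction is a hypothetical helper, and the repository's --weight_mask and --same_mask options refine how the kept weights are chosen.

import torch

def forget_fraction(agent, keep_frac=0.1):
    # Keep a random fraction of each parameter tensor and re-initialize the rest
    # (cf. --sender_keep_perc / --receiver_keep_perc).
    for param in agent.parameters():
        keep = torch.rand_like(param) < keep_frac
        new_values = torch.randn_like(param) * 0.1  # arbitrary re-init scale
        param.data = torch.where(keep, param.data, new_values)

# called every trainIters / resetNum iterations during training:
# forget_fraction(sender, keep_frac=0.1)
# forget_fraction(receiver, keep_frac=0.1)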

To run the targeted forgetting experiments:

python mixed_language_forget_samebatch.py --group_vars same_mask weight_mask reset_to_zero keep_perc seed trainIters train_with_reset reset_every --seed 0 --keep_perc 0.5 --fname new_rand_reinit

python mixed_language_forget_samebatch.py --group_vars same_mask weight_mask reset_to_zero keep_perc seed trainIters train_with_reset reset_every --seed 0 --keep_perc 0.5 --fname same_weight_zero --same_mask --weight_mask --reset_to_zero
