Multi-Task Meta-Learning Modification with Stochastic Approximation

Overview

This repository contains the code for the paper
"Multi-Task Meta-Learning Modification with Stochastic Approximation".

Method pipeline

Dependencies

This code has been tested on Ubuntu 16.04 with Python 3.8 and PyTorch 1.8.

To install the required dependencies:

pip install -r requirements.txt

Usage

To reproduce the results on the benchmarks described in our article, use the following scripts. To vary the type of experiment, change the script parameters that control the benchmark dataset, the number of shots, and the number of ways (e.g. miniImageNet 1-shot 5-way or CIFAR-FS 5-shot 2-way).

MAML

Multi-task modification (MTM) for Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017).

Multi-task modifications for MAML are trained on top of a baseline MAML model, which has to be trained beforehand. All SPSA-based variants estimate the gradient of the meta-objective with respect to the per-task weights from two perturbed evaluations of the meta-loss; a sketch of this update follows.
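
The minimal Python sketch below illustrates one such SPSA weight update. The names (spsa_update, meta_loss_fn, alpha, beta) are illustrative only and do not correspond to the repository's internal API, which is selected through the --task-weighting flag:

import torch

def spsa_update(weights, meta_loss_fn, alpha=0.01, beta=0.1):
    # One illustrative SPSA step on the per-task weights.
    # meta_loss_fn is assumed to return the scalar meta-loss for a given
    # weight vector; alpha and beta are the step and perturbation sizes.
    delta = 2.0 * torch.randint(0, 2, weights.shape, dtype=torch.float32) - 1.0
    loss_plus = meta_loss_fn(weights + beta * delta)
    loss_minus = meta_loss_fn(weights - beta * delta)
    # Two-point SPSA estimate of the gradient of the meta-loss w.r.t. the
    # weights (for +/-1 perturbations, multiplying by delta equals the
    # usual per-coordinate division by delta).
    grad_estimate = (loss_plus - loss_minus) / (2.0 * beta) * delta
    return weights - alpha * grad_estimate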

To train MAML (reproduced) on miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-miniimagenet \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Track on miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name mini-imagenet-mtm-spsa-track \
    --load "./results/reproduced-miniimagenet/model.th" \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-track \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-tieredimagenet \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA on tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name tiered-imagenet-mtm-spsa \
    --load "./results/reproduced-tieredimagenet/model.th" \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-delta \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-fc100 \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Coarse on FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name fc100-mtm-spsa-coarse \
    --load "./results/reproduced-fc100/model.th" \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --task-weighting spsa-per-coarse-class \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Inner First-Order on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-inner-first-order \
    --load "./results/reproduced-cifar/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --use-inner-optimizer \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Backprop on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-backprop \
    --load "./results/reproduced-cifar-5shot-5way/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To test any of the models trained above, run:

python maml/test.py ./results/path-to-config/config.json --num-steps 10 --use-cuda

For instance, to test MAML MTM SPSA-Track on miniImageNet 1-shot 2-way benchmark, run:

python maml/test.py ./results/mini-imagenet-mtm-spsa-track/config.json --num-steps 10 --use-cuda

Prototypical Networks

Multi-task modification (MTM) for Prototypical Networks (ProtoNet) (Snell et al., 2017). In the multi-task setting, each training episode contributes a prototypical-network loss that is scaled by its learned task weight before aggregation, as sketched below.
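
The following minimal sketch shows such a weighted episodic loss. The names (weighted_protonet_loss, task_weight, etc.) are illustrative, not the repository's actual API, and query labels are assumed to be integers 0..C-1 matching the sorted class order:

import torch
import torch.nn.functional as F

def weighted_protonet_loss(support_emb, support_lbl, query_emb, query_lbl, task_weight):
    # Prototype of each class: the mean of its support embeddings.
    classes = support_lbl.unique()  # sorted unique class labels
    prototypes = torch.stack(
        [support_emb[support_lbl == c].mean(dim=0) for c in classes])
    # Queries are scored by negative squared Euclidean distance to the
    # prototypes, which serves as the class logits.
    logits = -torch.cdist(query_emb, prototypes) ** 2
    # The episodic cross-entropy loss is scaled by the learned task weight.
    return task_weight * F.cross_entropy(logits, query_lbl)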

To train ProtoNet MTM SPSA-Track with ResNet-12 backbone on miniImageNet 1-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset miniImageNet \
    --network ResNet12 \
    --tracking \
    --train-shot 1 \
    --train-way 5 \
    --val-shot 1 \
    --val-way 5

To test ProtoNet MTM SPSA-Track with ResNet-12 backbone on miniImageNet 1-shot 5-way benchmark, run:

python protonet/test.py --dataset miniImageNet --network ResNet12 --shot 1 --way 5

To train ProtoNet MTM Backprop with 64-64-64-64 backbone on CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset CIFAR_FS \
    --train-weights \
    --train-weights-layer \
    --train-shot 1 \
    --train-way 2 \
    --val-shot 1 \
    --val-way 2

To test ProtoNet MTM Backprop with 64-64-64-64 backbone on CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/test.py --dataset CIFAR_FS --shot 1 --way 2

To train ProtoNet MTM Inner First-Order with 64-64-64-64 backbone on FC100 10-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset FC100 \
    --train-weights \
    --train-weights-opt \
    --train-shot 10 \
    --train-way 5 \
    --val-shot 10 \
    --val-way 5

To test ProtoNet MTM Inner First-Order with 64-64-64-64 backbone on FC100 10-shot 5-way benchmark, run:

python protonet/test.py --dataset FC100 --shot 10 --way 5

To train ProtoNet MTM SPSA with 64-64-64-64 backbone on tieredImageNet 5-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset tieredImageNet \
    --train-shot 5 \
    --train-way 2 \
    --val-shot 5 \
    --val-way 2

To test ProtoNet MTM SPSA with 64-64-64-64 backbone on tieredImageNet 5-shot 2-way benchmark, run:

python protonet/test.py --dataset tieredImageNet --shot 5 --way 2

Acknowledgments

Our code uses some dataloaders from Torchmeta.

Code in the maml folder is based on the extended implementation from Torchmeta and on pytorch-maml. The code has been updated so that baseline scores follow those of the original MAML paper more closely.

Code in the protonet folder is based on the implementation from MetaOptNet. All .py files in this folder except dataloaders.py and optimize.py were adapted from that implementation and subsequently modified. A copy of the Apache License, Version 2.0 is available in the protonet folder.
