Official PyTorch implementation of Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference (ICLR 2022)


The Official Implementation of CLIB (Continual Learning for i-Blurry)

Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference
Hyunseo Koh*, Dahyun Kim*, Jung-Woo Ha, Jonghyun Choi
ICLR 2022 [Paper]
(* indicates equal contribution)

Overview

Abstract

Despite rapid advances in continual learning, a large body of research is devoted to improving performance in the existing setups. While a handful of works do propose new continual learning setups, they still lack practicality in certain aspects. For better practicality, we first propose a novel continual learning setup that is online, task-free, class-incremental, of blurry task boundaries and subject to inference queries at any moment. We additionally propose a new metric to better measure the performance of the continual learning methods subject to inference queries at any moment. To address the challenging setup and evaluation protocol, we propose an effective method that employs a new memory management scheme and novel learning techniques. Our empirical validation demonstrates that the proposed method outperforms prior arts by large margins.

Results

Results of CL methods on various datasets for online continual learning on the i-Blurry-50-10 split, measured by the A_AUC metric proposed in the paper. For more details, please refer to our paper.

| Methods | CIFAR10 | CIFAR100 | TinyImageNet | ImageNet |
|---|---|---|---|---|
| EWC++ | 57.34±2.10 | 35.35±1.96 | 22.26±1.15 | 24.81 |
| BiC | 58.38±0.54 | 33.51±3.04 | 22.80±0.94 | 27.41 |
| ER-MIR | 57.28±2.43 | 35.35±1.41 | 22.10±1.14 | 20.48 |
| GDumb | 53.20±1.93 | 32.84±0.45 | 18.17±0.19 | 14.41 |
| RM | 23.00±1.43 | 8.63±0.19 | 5.74±0.30 | 6.22 |
| Baseline-ER | 57.46±2.25 | 35.61±2.08 | 22.45±1.15 | 25.16 |
| CLIB | 70.26±1.28 | 46.67±0.79 | 23.87±0.68 | 28.16 |

Getting Started

To set up the environment for running the code, you can either use the provided Docker container or manually install the requirements in a virtual environment.

Using Docker Container (Recommended)

We provide the Docker image khs8157/iblurry on Docker Hub for reproducing the results. To download the image, run the following command:

docker pull khs8157/iblurry:latest

After pulling the image, you can run the container via the following command:

docker run --gpus all -it --shm-size=64gb -v /PATH/TO/CODE:/PATH/TO/CODE --name=CONTAINER_NAME khs8157/iblurry:latest bash

Replace /PATH/TO/CODE and CONTAINER_NAME with your own code path and container name.
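For example, assuming the repository was cloned to /home/user/i-blurry and you call the container iblurry-exp (both are placeholders, not requirements), the command might look like:

# path and container name below are illustrative placeholders
docker run --gpus all -it --shm-size=64gb -v /home/user/i-blurry:/home/user/i-blurry --name=iblurry-exp khs8157/iblurry:latest bash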

Requirements

  • Python 3
  • PyTorch (>=1.9)
  • torchvision (>=0.10)
  • numpy
  • pillow~=6.2.1
  • torch_optimizer
  • randaugment
  • easydict
  • pandas~=1.1.3

If you are not using the Docker container, install the requirements with the following command:

pip install -r requirements.txt
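If you prefer an isolated setup, a fresh virtual environment can be created first; the environment name venv-iblurry below is just an example:

# create and activate a virtual environment, then install the pinned requirements
python3 -m venv venv-iblurry
source venv-iblurry/bin/activate
pip install -r requirements.txt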

Running Experiments

Downloading the Datasets

CIFAR10, CIFAR100, and TinyImageNet can be downloaded by running the corresponding scripts in the dataset/ directory. The ImageNet dataset can be downloaded from Kaggle.
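As an illustration, downloading a dataset would look roughly like the following; the exact script name is an assumption, so check the dataset/ directory for the actual filenames:

# script name is illustrative; see dataset/ for the actual download scripts
bash dataset/download_cifar10.sh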

Experiments Using Shell Script

Experiments for the implemented methods can be run by executing the shell scripts provided in the scripts/ directory. For example, you can run a CL experiment with the CLIB method by executing

bash scripts/clib.sh

You may change various arguments for different experiments.

  • NOTE: Short description of the experiment. Experiment results and logs will be saved to results/DATASET/NOTE.
    • WARNING: logs/results with the same dataset and note will be overwritten!
  • MODE: CL method to be applied. Methods implemented in this version are: [clib, er, ewc++, bic, mir, gdumb, rm]
  • DATASET: Dataset to use in experiment. Supported datasets are: [cifar10, cifar100, tinyimagenet, imagenet]
  • N_TASKS: Number of tasks. Note that the corresponding JSON file must exist in the collections/ directory.
  • N: Percentage of disjoint classes in the i-Blurry split. N=100 is fully disjoint; N=0 is fully blurry. The corresponding JSON file must exist in the collections/ directory.
  • M: Blurry ratio of the blurry classes in the i-Blurry split. The corresponding JSON file must exist in the collections/ directory.
  • GPU_TRANSFORM: Perform AutoAug on the GPU for faster training.
  • USE_AMP: Use automatic mixed precision (AMP) for faster training and lower memory usage.
  • MEM_SIZE: Maximum number of samples in the episodic memory.
  • ONLINE_ITER: Number of model updates per sample.
  • EVAL_PERIOD: Period of evaluation queries, used for calculating the A_AUC metric.
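As a rough sketch of how these arguments fit together, the variables at the top of a run script could be set as follows. The values are illustrative only (N=50 and M=10 correspond to the i-Blurry-50-10 split used in the results above), and how scripts/clib.sh actually wires them into the training command should be checked in the script itself:

# illustrative settings only; see scripts/clib.sh for the actual values and usage
NOTE="clib-cifar10-trial1"  # results and logs go to results/cifar10/clib-cifar10-trial1
MODE="clib"                 # CL method
DATASET="cifar10"
N_TASKS=5                   # number of tasks (matching JSON must exist in collections/)
N=50                        # percentage of disjoint classes
M=10                        # blurry ratio of the blurry classes
GPU_TRANSFORM=1             # perform AutoAug on GPU
USE_AMP=1                   # automatic mixed precision
MEM_SIZE=500                # maximum number of samples in episodic memory
ONLINE_ITER=1               # model updates per sample
EVAL_PERIOD=100             # period of evaluation queries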

Citation

If you use our code or the i-Blurry setup, please cite our paper.

@inproceedings{koh2022online,
  title={Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference},
  author={Koh, Hyunseo and Kim, Dahyun and Ha, Jung-Woo and Choi, Jonghyun},
  booktitle={ICLR},
  year={2022}
}

License

Copyright (C) 2022-present NAVER Corp.

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.