Incremental Cross-Domain Adaptation for Robust Retinopathy Screening via Bayesian Deep Learning

Update (September 18th, 2021)

A supporting document describing the differences between transfer learning, incremental learning, domain adaptation, and the proposed incremental cross-domain adaptation approach has been uploaded to this repository.

Update (August 15th, 2021)

The Blind Testing Dataset has been released.

Introduction

This repository contains an implementation of a continual learning loss function (driven via Bayesian inference) that penalizes deep classification networks so that they can incrementally learn diverse classification tasks across various domain shifts.
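The exact Bayesian-inference-driven loss is defined in the paper and implemented in the training scripts of this repository. Purely as an illustration of the general idea, a continual learning penalty can be written as a quadratic regularizer (in the spirit of Elastic Weight Consolidation) that keeps the network close to the parameters learned on earlier domains, weighted by a per-parameter importance estimate such as a diagonal Fisher approximation. The sketch below is a generic example with hypothetical names, not the paper's exact formulation:

```python
import tensorflow as tf

def continual_penalty(model, old_weights, importance, lam=1.0):
    """Quadratic penalty that keeps the current weights close to those
    learned on previous domains, scaled by a per-parameter importance
    estimate (e.g. a diagonal Fisher approximation). Illustrative only."""
    penalty = 0.0
    for w, w_old, f in zip(model.trainable_weights, old_weights, importance):
        penalty += tf.reduce_sum(f * tf.square(w - w_old))
    return lam * penalty

def total_loss(model, x, y, old_weights, importance, lam=1.0):
    # Standard classification loss on the current domain plus the
    # continual learning penalty on previously learned parameters.
    logits = model(x, training=True)
    ce = tf.keras.losses.sparse_categorical_crossentropy(y, logits, from_logits=True)
    return tf.reduce_mean(ce) + continual_penalty(model, old_weights, importance, lam)
```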

Installation

To run the codebase, please download and install Anaconda (and also install MATLAB R2020a with the Deep Learning, Image Processing, and Computer Vision toolboxes). Afterward, import the ‘environment.yml’ file or, alternatively, install the following packages:

  1. Python 3.7.9
  2. TensorFlow 2.1.0 (CUDA compatible GPU needed for GPU training)
  3. Keras 2.3.0 or above
  4. OpenCV 4.2
  5. Imgaug 0.2.9 or above
  6. Tqdm
  7. Pandas
  8. Pillow 8.2.0

Both Linux and Windows OS are supported.
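
Once the environment is set up, a quick Python sanity check such as the one below can be used to verify that the core packages listed above import correctly and that a CUDA-compatible GPU is visible (this snippet is only illustrative and is not part of the repository):

```python
# Quick sanity check of the core dependencies listed above.
import sys
import tensorflow as tf
import keras
import cv2
import imgaug
import pandas as pd
import PIL

print("Python      :", sys.version.split()[0])
print("TensorFlow  :", tf.__version__)
print("Keras       :", keras.__version__)
print("OpenCV      :", cv2.__version__)
print("imgaug      :", imgaug.__version__)
print("pandas      :", pd.__version__)
print("Pillow      :", PIL.__version__)
print("GPU devices :", tf.config.list_physical_devices('GPU'))
```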

Datasets

The datasets used in the paper can be downloaded from the following URLs:

  1. Rabbani
  2. BIOMISA
  3. Zhang
  4. Duke-I
  5. Duke-II
  6. Duke-III
  7. Blind Testing Dataset

A description file for the datasets is also uploaded here. Please follow the steps below to prepare the training and testing data; the same steps apply to any custom dataset. Please note that in this research, the disease severity within the scans of all the above-mentioned datasets was graded by multiple expert ophthalmologists, and these annotations are also released publicly in this repository.

Dataset Preparation

  1. Download the desired data and put the training images in the '…\datasets\trainK' folder (where K indicates the iteration).
  2. The directory structure is given below; a minimal helper sketch for arranging the files follows the tree.
├── datasets
│   ├── test
│   │   └── test_image_1.png
│   │   └── test_image_2.png
│   │   ...
│   │   └── test_image_n.png
│   ├── train1
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_m.png
│   ├── train2
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_j.png
│   ...
│   ├── trainK
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_o.png
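
If the raw images are not already arranged this way, a small helper along the following lines can copy them into the expected folders. The source paths and the choice of which dataset goes into which iteration are hypothetical placeholders:

```python
import shutil
from pathlib import Path

def prepare_split(src_dir, dst_dir):
    """Copy all PNG scans from src_dir into dst_dir (e.g. datasets/train1)."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img in sorted(Path(src_dir).glob("*.png")):
        shutil.copy(img, dst / img.name)

# Hypothetical example: two incremental training sets and one test set.
prepare_split("raw/dataset_1", "datasets/train1")
prepare_split("raw/dataset_2", "datasets/train2")
prepare_split("raw/held_out", "datasets/test")
```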

Training and Testing

  1. Use ‘trainer.py’ to train the chosen model incrementally. After each iteration, the learned representations are saved in an .h5 file.
  2. After training the model instances, use ‘tester.py’ to generate the classification results (a minimal loading sketch is given after this list).
  3. Use ‘confusionMatrix.m’ to view the obtained results.
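
The saved model instances can also be inspected outside the provided scripts. The sketch below assumes a full Keras model saved as, e.g., 'model_iter1.h5' and a hypothetical input size; if trainer.py saves weights only, the corresponding architecture would need to be rebuilt before loading:

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical file name; the actual name depends on how trainer.py saves it.
model = tf.keras.models.load_model("model_iter1.h5", compile=False)

img = cv2.imread("datasets/test/test_image_1.png")
img = cv2.resize(img, (224, 224)).astype("float32") / 255.0  # assumed input size
probs = model.predict(np.expand_dims(img, axis=0))
print("Predicted class:", int(np.argmax(probs, axis=-1)[0]))
```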

Results

The detailed results of the proposed framework on all the above-mentioned datasets are stored in the 'results.mat' file.
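
The results file can be opened in MATLAB directly, or loaded in Python as sketched below (assuming SciPy is available; the variable names inside 'results.mat' are not documented here, so the snippet simply lists them):

```python
from scipy.io import loadmat

# Load the released results file and list the variables it contains.
results = loadmat("results.mat")
for key, value in results.items():
    if not key.startswith("__"):
        print(key, getattr(value, "shape", type(value)))
```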

Citation

If you use the proposed scheme (or any part of this code) in your research, please cite the following paper:

@article{BayesianIDA,
  title   = {Incremental Cross-Domain Adaptation for Robust Retinopathy Screening via Bayesian Deep Learning},
  author  = {Taimur Hassan and Bilal Hassan and Muhammad Usman Akram and Shahrukh Hashmi and Abdul Hakeem and Naoufel Werghi},
  journal = {IEEE Transactions on Instrumentation and Measurement},
  year    = {2021}
}

Contact

If you have any queries, please feel free to contact us at: [email protected].
