The official repository for "Revealing unforeseen diagnostic image features with deep learning by detecting cardiovascular diseases from apical four-chamber ultrasounds"

Overview

Revealing unforeseen diagnostic image features with deep learning by detecting cardiovascular diseases from apical four-chamber ultrasounds

In this project, we aimed to develop a deep learning (DL) method to automatically detect impaired left ventricular (LV) function and aortic valve (AV) regurgitation from apical four-chamber (A4C) ultrasound cineloops. Two R(2+1)D convolutional neural networks (CNNs) were trained to detect the respective diseases. Subsequently, t-SNE was used to visualize the embedding of the extracted feature vectors, and DeepLIFT was used to identify important image features associated with the diagnostic tasks.
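
For readers who want a concrete picture of the embedding step, below is a minimal sketch (not the repository's own code): it builds a single-channel R(2+1)D-18 backbone, extracts the 512-dimensional penultimate-layer features for a dummy batch of clips, and projects them with scikit-learn's t-SNE. The clip shape, frame count, and batch size are illustrative assumptions.

import torch
import torch.nn as nn
import torchvision
from sklearn.manifold import TSNE

# Illustrative sketch: single-channel R(2+1)D-18 feature extractor + t-SNE.
model = torchvision.models.video.r2plus1d_18(pretrained=False)
model.stem[0] = nn.Conv3d(1, 45, kernel_size=(1, 7, 7), stride=(1, 2, 2),
                          padding=(0, 3, 3), bias=False)   # grayscale input
model.fc = nn.Identity()                    # expose the 512-d feature vector
model.eval()

clips = torch.randn(8, 1, 30, 112, 112)     # dummy batch: (N, C, T, H, W)
with torch.no_grad():
    features = model(clips)                 # (N, 512)

# Project the feature vectors to 2D for visualization.
embedding = TSNE(n_components=2, perplexity=5).fit_transform(features.numpy())
print(embedding.shape)                      # (N, 2)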

The why

  • An automated echocardiography interpretation method requiring only a limited set of views as input, such as the A4C view, could make cardiovascular disease diagnosis more accessible.

    • Such a system could be beneficial in geographic regions with limited access to expert cardiologists and sonographers.
    • It could also support general practitioners in the management of patients with suspected CVD, facilitating timely diagnosis and treatment.
  • If the trained CNNs can detect the diseases from such limited information, how do they do it?

    • In particular, AV regurgitation is typically diagnosed from color Doppler images acquired in one or more views. Given only the A4C view, would the model still be able to detect regurgitation? If so, which image features does it use to make the distinction? Since only the A4C view is provided, would the model identify anatomical structures or motion associated with regurgitation that are typically not considered in conventional image interpretation? This is what we set out to answer in this study.

Image features associated with the diagnostic tasks

DeepLIFT attributes a model's classification output to input features (pixels), which allows us to identify the regions and frames in an ultrasound cineloop that drive the model toward a particular diagnosis. Below are some example analyses.
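
As a rough illustration of how such an attribution can be computed, the sketch below uses Captum's DeepLift on a single-channel R(2+1)D-18 classifier. It is not the repository's own analysis code; the clip shape and target class index are assumptions.

import torch
import torch.nn as nn
import torchvision
from captum.attr import DeepLift

# Build the classifier as in the weight-loading section below; in practice,
# load the trained weights before computing attributions.
model = torchvision.models.video.r2plus1d_18(pretrained=False)
model.stem[0] = nn.Conv3d(1, 45, kernel_size=(1, 7, 7), stride=(1, 2, 2),
                          padding=(0, 3, 3), bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)
model.eval()

# Captum's hooks are safer without in-place activations.
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False

clip = torch.randn(1, 1, 30, 112, 112, requires_grad=True)  # dummy A4C clip
baseline = torch.zeros_like(clip)                            # all-black reference

# Attribute the score of one class (index 0 here) back to the input pixels.
attributions = DeepLift(model).attribute(clip, baselines=baseline, target=0)
print(attributions.shape)  # same shape as the input clip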

Representative normal cases

Case      | Averaged logit | Input clip / Impaired LV function model's focus / AV regurgitation model's focus
Normal1   | 0.9999         | image
Normal2   | 0.9999         | image
Normal3   | 0.9999         | image
Normal4   | 0.9999         | image
Normal5   | 0.9999         | image
Normal6   | 0.9999         | image
Normal7   | 0.9998         | image
Normal8   | 0.9998         | image
Normal9   | 0.9998         | image
Normal10  | 0.9997         | image

DeepLIFT analyses reveal that the LV myocardium and mitral valve were important for detecting impaired LV function, while the tip of the mitral valve anterior leaflet, during opening, was considered important for detecting AV regurgitation. Apart from the above examples, all confident cases are provided, i.e., cases for which both models predicted a probability of the normal class higher than 0.98. See the full list here.

Representative disease cases

  • Mildly impaired LV

    Case     | Logit  | Input clip / Impaired LV function model's focus
    MildILV1 | 0.9989 | image
    MildILV2 | 0.9988 | image

  • Severely impaired LV

    Case       | Logit  | Input clip / Impaired LV function model's focus
    SevereILV1 | 1.0000 | image
    SevereILV2 | 1.0000 | image

  • Mild AV regurgitation

    Case     | Logit  | Input clip / AV regurgitation model's focus
    MildAVR1 | 0.7240 | image
    MildAVR2 | 0.6893 | image

  • Substantial AV regurgitation

    Case            | Logit  | Input clip / AV regurgitation model's focus
    SubstantialAVR1 | 0.9919 | image
    SubstantialAVR2 | 0.9645 | image

When analyzing disease cases, the highlighted regions differ considerably between queries. We speculate that this may be due to the greater heterogeneity in the appearance of the disease cases. Apart from the above examples, more confident disease cases are provided. See the full list here.

Run the code on your own dataset

The dataloader in util can be modified to fit your own dataset; a minimal, illustrative dataset sketch is shown after the commands below. To run the full workflow, namely training, validation, testing, and the subsequent analyses, run the following commands:

git clone https://github.com/LishinC/Disease-Detection-and-Diagnostic-Image-Feature.git
cd Disease-Detection-and-Diagnostic-Image-Feature/util
pip install -e .
cd ../projectDDDIF
python main.py
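
The exact interface expected by the dataloader in util is not reproduced here; the sketch below shows one plausible way to wrap your own cineloops as a PyTorch Dataset yielding single-channel (clip, label) pairs. The file format, array shapes, and class names are assumptions, not the repository's specification.

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class A4CClipDataset(Dataset):
    """Hypothetical dataset: each sample is a single-channel A4C cineloop
    stored as a .npy array of shape (T, H, W), plus an integer class label."""

    def __init__(self, clip_paths, labels):
        self.clip_paths = clip_paths
        self.labels = labels

    def __len__(self):
        return len(self.clip_paths)

    def __getitem__(self, idx):
        clip = np.load(self.clip_paths[idx]).astype(np.float32)  # (T, H, W)
        clip = torch.from_numpy(clip).unsqueeze(0)               # (1, T, H, W)
        return clip, self.labels[idx]

# Example usage (paths and labels are your own data):
# loader = DataLoader(A4CClipDataset(paths, labels), batch_size=4, shuffle=True)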

Loading the trained model weights

The model weights are made available for external validation or as a pretrained starting point for other echocardiography-related tasks. To load the weights, navigate to the projectDDDIF folder and run the following Python code:

import torch
import torch.nn as nn
import torchvision

# Load the impaired LV function model
model_path = 'model/impairedLV/train/model_val_min.pth'
# # Load the AV regurgitation model instead
# model_path = 'model/regurg/train/model_val_min.pth'

# R(2+1)D-18 backbone: the stem is adapted to single-channel (grayscale) input,
# and the final layer outputs 3 classes
model = torchvision.models.video.__dict__["r2plus1d_18"](pretrained=False)
model.stem[0] = nn.Conv3d(1, 45, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3), bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)
model.load_state_dict(torch.load(model_path))
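
As a quick sanity check that the weights loaded correctly, you could run a dummy clip through the model; the input shape used below, (batch, channel, frames, height, width), is an assumption rather than the repository's documented specification.

# Sanity check with a random clip (shape is an assumption, not the repo's spec).
model.eval()
with torch.no_grad():
    dummy = torch.randn(1, 1, 30, 112, 112)   # (N, C, T, H, W)
    logits = model(dummy)                     # (1, 3): one score per class
    probs = torch.softmax(logits, dim=1)
print(probs)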

Questions and feedback

For technical problems or comments about the project, feel free to contact [email protected].
