Pneumonia Detection using Machine Learning with PyTorch

Overview

Pneumonia detection from chest X-ray images using a convolutional neural network built with PyTorch. The model distinguishes between normal, bacterial pneumonia, and viral pneumonia.

Training was done in Google Colab:

Training In Colab


DEMO:

gif

Result (Confusion Matrix):

confusion matrix

Data

I uploaded my dataset to Kaggle. It is a modified version of this dataset from Kaggle: instead of NORMAL and PNEUMONIA, I split the PNEUMONIA class into BACTERIAL PNEUMONIA and VIRAL PNEUMONIA. This way the data is more evenly distributed and the model can distinguish between viral and bacterial pneumonia. I also merged the validation set into the test set, because the validation set only had 8 images per class.
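
As a rough illustration of how the PNEUMONIA class could be split into bacterial and viral sub-classes (assuming, as in the original Kaggle dataset, that the file names contain either "bacteria" or "virus"; the folder paths below are hypothetical, not the exact paths used in this project):

import shutil
from pathlib import Path

# Original Kaggle folder (assumed path) and the two new class folders.
src = Path('chest_xray/train/PNEUMONIA')
dst_bacterial = Path('data/train/BACTERIAL_PNEUMONIA')
dst_viral = Path('data/train/VIRAL_PNEUMONIA')
dst_bacterial.mkdir(parents=True, exist_ok=True)
dst_viral.mkdir(parents=True, exist_ok=True)

for img in src.glob('*.jpeg'):
    # File names look like person1_bacteria_1.jpeg or person1_virus_6.jpeg.
    target = dst_bacterial if 'bacteria' in img.name.lower() else dst_viral
    shutil.copy(img, target / img.name)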

This is the resulting distribution:

data distribution

Processing and Augmentation

I resized the images to 150x150 and, since some images were already grayscale, converted all of them to grayscale.

Additionally, I applied the following transformations/augmentations to the training data:

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(45),
])

and the following transformations to the test data:

test_transform = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
])
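
These composed transforms can then be plugged into torchvision datasets and PyTorch DataLoaders. A minimal sketch, assuming an ImageFolder-style directory layout with one sub-folder per class and a batch size of 32 (both are assumptions, not values taken from this project):

import torch
from torchvision import datasets

# Assumed layout: data/train/<CLASS>/... and data/test/<CLASS>/...
train_set = datasets.ImageFolder('data/train', transform=train_transform)
test_set = datasets.ImageFolder('data/test', transform=test_transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32, shuffle=False)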

This is the resulting data:

sample images

I also used one-hot encoding for the labels!
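
A minimal sketch of one-hot encoding integer class labels in PyTorch (where exactly this happens in the training code is an assumption):

import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])                     # integer class indices from the DataLoader
one_hot = F.one_hot(labels, num_classes=3).float()   # shape (3, 3), e.g. [1., 0., 0.] for class 0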



Model

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 16, 148, 148]             160
              ReLU-2         [-1, 16, 148, 148]               0
       BatchNorm2d-3         [-1, 16, 148, 148]              32
            Conv2d-4         [-1, 16, 146, 146]           2,320
              ReLU-5         [-1, 16, 146, 146]               0
       BatchNorm2d-6         [-1, 16, 146, 146]              32
         MaxPool2d-7           [-1, 16, 73, 73]               0
            Conv2d-8           [-1, 32, 71, 71]           4,640
              ReLU-9           [-1, 32, 71, 71]               0
      BatchNorm2d-10           [-1, 32, 71, 71]              64
           Conv2d-11           [-1, 32, 69, 69]           9,248
             ReLU-12           [-1, 32, 69, 69]               0
      BatchNorm2d-13           [-1, 32, 69, 69]              64
        MaxPool2d-14           [-1, 32, 34, 34]               0
           Conv2d-15           [-1, 64, 32, 32]          18,496
             ReLU-16           [-1, 64, 32, 32]               0
      BatchNorm2d-17           [-1, 64, 32, 32]             128
           Conv2d-18           [-1, 64, 30, 30]          36,928
             ReLU-19           [-1, 64, 30, 30]               0
      BatchNorm2d-20           [-1, 64, 30, 30]             128
        MaxPool2d-21           [-1, 64, 15, 15]               0
           Conv2d-22          [-1, 128, 13, 13]          73,856
             ReLU-23          [-1, 128, 13, 13]               0
      BatchNorm2d-24          [-1, 128, 13, 13]             256
           Conv2d-25          [-1, 128, 11, 11]         147,584
             ReLU-26          [-1, 128, 11, 11]               0
      BatchNorm2d-27          [-1, 128, 11, 11]             256
        MaxPool2d-28            [-1, 128, 5, 5]               0
          Flatten-29                 [-1, 3200]               0
           Linear-30                 [-1, 4096]      13,111,296
             ReLU-31                 [-1, 4096]               0
          Dropout-32                 [-1, 4096]               0
           Linear-33                 [-1, 4096]      16,781,312
             ReLU-34                 [-1, 4096]               0
          Dropout-35                 [-1, 4096]               0
           Linear-36                    [-1, 3]          12,291
          Softmax-37                    [-1, 3]               0
================================================================
Total params: 30,199,091
Trainable params: 30,199,091
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.09
Forward/backward pass size (MB): 27.95
Params size (MB): 115.20
Estimated Total Size (MB): 143.24
----------------------------------------------------------------
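
The summary above can be reconstructed as the following sketch (layer order and kernel sizes are inferred from the printed shapes; the dropout probability is an assumption, since it does not appear in the summary):

import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions (no padding), each followed by ReLU and BatchNorm,
    # then 2x2 max pooling - this matches the shrinking spatial sizes above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
        nn.Conv2d(out_ch, out_ch, kernel_size=3),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    conv_block(1, 16),      # 150x150 -> 73x73
    conv_block(16, 32),     # 73x73 -> 34x34
    conv_block(32, 64),     # 34x34 -> 15x15
    conv_block(64, 128),    # 15x15 -> 5x5
    nn.Flatten(),           # 128 * 5 * 5 = 3200
    nn.Linear(3200, 4096),
    nn.ReLU(),
    nn.Dropout(0.5),        # probability assumed
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(4096, 3),     # NORMAL / BACTERIAL PNEUMONIA / VIRAL PNEUMONIA
    nn.Softmax(dim=1),
)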

Visualization using Streamlit

The web app is not hosted because the model is too large; I would have to host it on a server. It is only meant to visualize the model's predictions.
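
A minimal sketch of what the Streamlit app could look like (the model file name, class order, and loading code are assumptions, not the project's exact implementation):

import streamlit as st
import torch
from PIL import Image
from torchvision import transforms

CLASSES = ['NORMAL', 'BACTERIAL PNEUMONIA', 'VIRAL PNEUMONIA']   # assumed class order

# Assumes the full model object was saved with torch.save(model, 'model.pth').
model = torch.load('model.pth', map_location='cpu')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
])

st.title('Pneumonia Detection')
uploaded = st.file_uploader('Upload a chest X-ray', type=['png', 'jpg', 'jpeg'])

if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption='Uploaded X-ray')

    x = preprocess(image).unsqueeze(0)   # add batch dimension
    with torch.no_grad():
        probs = model(x)[0]              # Softmax output of the network

    st.write({c: float(p) for c, p in zip(CLASSES, probs)})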

Owner: Wilhelm Berghammer (Artificial Intelligence Student @ JKU, 1st year)