Pneumonia Detection using machine learning - with PyTorch

Overview

Pneumonia detection from chest X-ray images using deep learning, implemented in PyTorch. The model classifies an X-ray as NORMAL, BACTERIAL PNEUMONIA, or VIRAL PNEUMONIA.

Training was done in Google Colab:

Training In Colab


DEMO:

gif

Result (Confusion Matrix):

confusion matrix

Data

I uploaded my dataset to Kaggle. It is a modified version of this chest X-ray dataset from Kaggle: instead of NORMAL and PNEUMONIA, I split the PNEUMONIA class into BACTERIAL PNEUMONIA and VIRAL PNEUMONIA. This makes the class distribution more even and lets the model distinguish between viral and bacterial pneumonia. I also merged the original validation set into the test set, because the validation set only had 8 images per class.
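In the original Kaggle dataset, the PNEUMONIA filenames indicate the subtype (they contain "bacteria" or "virus"), so the split can be reproduced with a small script along these lines. This is only a sketch; the paths are assumptions, not the ones used for the uploaded dataset:

import shutil
from pathlib import Path

# Split the PNEUMONIA class by filename; paths are assumptions.
src = Path("chest_xray/train/PNEUMONIA")
dst_bacterial = Path("data/train/BACTERIAL_PNEUMONIA")
dst_viral = Path("data/train/VIRAL_PNEUMONIA")
dst_bacterial.mkdir(parents=True, exist_ok=True)
dst_viral.mkdir(parents=True, exist_ok=True)

for img in src.glob("*.jpeg"):
    # In the original dataset the filename contains "bacteria" or "virus".
    target = dst_bacterial if "bacteria" in img.name.lower() else dst_viral
    shutil.copy(img, target / img.name)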

This is the resulting distribution:

data distribution

Processing and Augmentation

I resized the images to 150x150 and, because some of them were already grayscale, converted all images to grayscale.

Additionally, I applied the following transformations/augmentations to the training data:

transforms.Resize((150, 150)),
transforms.Grayscale(),
transforms.ToTensor(),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(45)

and these transformations to the test data:

transforms.Resize((150, 150)),
transforms.Grayscale(),
transforms.ToTensor(),
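
Put together, the data pipeline might look roughly like this. This is a minimal sketch; the directory layout and batch size are assumptions, not taken from the repo:

import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

train_transform = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(45),
])

test_transform = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
])

# Directory layout and batch size are assumptions.
train_data = ImageFolder("data/train", transform=train_transform)
test_data = ImageFolder("data/test", transform=test_transform)

train_loader = DataLoader(train_data, batch_size=32, shuffle=True)
test_loader = DataLoader(test_data, batch_size=32)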

This is the resulting data:

sample images

I also used one-hot encoding for the labels!
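A minimal sketch of that step, assuming the integer class indices come from an ImageFolder-style dataset:

import torch
import torch.nn.functional as F

# Convert a batch of integer class indices to one-hot vectors.
labels = torch.tensor([0, 2, 1])                    # three example class indices
one_hot = F.one_hot(labels, num_classes=3).float()  # shape (3, 3)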



Model

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 16, 148, 148]             160
              ReLU-2         [-1, 16, 148, 148]               0
       BatchNorm2d-3         [-1, 16, 148, 148]              32
            Conv2d-4         [-1, 16, 146, 146]           2,320
              ReLU-5         [-1, 16, 146, 146]               0
       BatchNorm2d-6         [-1, 16, 146, 146]              32
         MaxPool2d-7           [-1, 16, 73, 73]               0
            Conv2d-8           [-1, 32, 71, 71]           4,640
              ReLU-9           [-1, 32, 71, 71]               0
      BatchNorm2d-10           [-1, 32, 71, 71]              64
           Conv2d-11           [-1, 32, 69, 69]           9,248
             ReLU-12           [-1, 32, 69, 69]               0
      BatchNorm2d-13           [-1, 32, 69, 69]              64
        MaxPool2d-14           [-1, 32, 34, 34]               0
           Conv2d-15           [-1, 64, 32, 32]          18,496
             ReLU-16           [-1, 64, 32, 32]               0
      BatchNorm2d-17           [-1, 64, 32, 32]             128
           Conv2d-18           [-1, 64, 30, 30]          36,928
             ReLU-19           [-1, 64, 30, 30]               0
      BatchNorm2d-20           [-1, 64, 30, 30]             128
        MaxPool2d-21           [-1, 64, 15, 15]               0
           Conv2d-22          [-1, 128, 13, 13]          73,856
             ReLU-23          [-1, 128, 13, 13]               0
      BatchNorm2d-24          [-1, 128, 13, 13]             256
           Conv2d-25          [-1, 128, 11, 11]         147,584
             ReLU-26          [-1, 128, 11, 11]               0
      BatchNorm2d-27          [-1, 128, 11, 11]             256
        MaxPool2d-28            [-1, 128, 5, 5]               0
          Flatten-29                 [-1, 3200]               0
           Linear-30                 [-1, 4096]      13,111,296
             ReLU-31                 [-1, 4096]               0
          Dropout-32                 [-1, 4096]               0
           Linear-33                 [-1, 4096]      16,781,312
             ReLU-34                 [-1, 4096]               0
          Dropout-35                 [-1, 4096]               0
           Linear-36                    [-1, 3]          12,291
          Softmax-37                    [-1, 3]               0
================================================================
Total params: 30,199,091
Trainable params: 30,199,091
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.09
Forward/backward pass size (MB): 27.95
Params size (MB): 115.20
Estimated Total Size (MB): 143.24
----------------------------------------------------------------
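
The summary above corresponds to a plain CNN with four double-convolution blocks followed by a fully connected classifier. A sketch reconstructed from the table (layer order and shapes follow the summary; the dropout probability is an assumption) could look like this:

import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions (no padding) followed by 2x2 max pooling,
    # matching the output shapes in the summary above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
        nn.Conv2d(out_ch, out_ch, kernel_size=3),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
        nn.MaxPool2d(2),
    )

class PneumoniaCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16),    # 1x150x150 -> 16x73x73
            conv_block(16, 32),   # -> 32x34x34
            conv_block(32, 64),   # -> 64x15x15
            conv_block(64, 128),  # -> 128x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 5 * 5, 4096),
            nn.ReLU(),
            nn.Dropout(),  # dropout probability is an assumption (default 0.5)
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))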

Visualization using Streamlit

The web app is not hosted because the model is too large; I would have to host it on a server. It is only intended for visualizing predictions locally.
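For reference, a minimal Streamlit app of this kind might look like the sketch below. This is not the repo's actual app; the checkpoint path, the way the model was saved, and the class order are all assumptions:

import streamlit as st
import torch
import torchvision.transforms as transforms
from PIL import Image

CLASS_NAMES = ["BACTERIAL PNEUMONIA", "NORMAL", "VIRAL PNEUMONIA"]  # assumed class order

model = torch.load("model.pth", map_location="cpu")  # hypothetical checkpoint (full saved model)
model.eval()

transform = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.Grayscale(),
    transforms.ToTensor(),
])

st.title("Pneumonia Detection")
uploaded = st.file_uploader("Upload a chest X-ray", type=["png", "jpg", "jpeg"])
if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption="Uploaded X-ray")
    x = transform(image).unsqueeze(0)   # add batch dimension
    with torch.no_grad():
        probs = model(x)[0]             # model already ends in Softmax
    st.write({name: float(p) for name, p in zip(CLASS_NAMES, probs)})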
