capsule_endoscopy_detection (DACON Challenge)

Overview

  • Models based on YOLOv5, YOLOR, and mmdetection are used (an ensemble of 11 models in total)
    • For training, every model uses pretrained weights downloaded from the yolov5, yolor, mmdetection, and Swin Transformer GitHub repositories
    • The data is converted into the format required by each framework
  • Training is carried out with the data split into a train set and a validation set
  • A total of 11 results are ensembled
    • detectors_cascade_rcnn_resnet50_multiscale, retinanet_swin-l, retinanet_swin-l_multiscale, retinanet_swin-t, atss_swin-l_multiscale, faster_rcnn-swin-l_multiscale, yolor_tta_multiscale, yolov5x, yolov5x_tta, yolov5x_tta_multiscale
    • The ensemble is performed with Weighted Boxes Fusion (WBF) (IoU threshold = 0.4); a minimal sketch follows this list
    • For more details on the models, please refer to the comments in the .sh scripts starting with STEP2 in the /all_steps folder!
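
To illustrate the WBF step only (this is a minimal sketch, not the repository's actual STEP4 ensemble script), the snippet below uses the ensemble_boxes package with the same IoU threshold of 0.4; the boxes, scores, and labels are made-up placeholder values, and the coordinates are assumed to be normalized to [0, 1].

# Minimal sketch of Weighted Boxes Fusion (WBF) with IoU threshold 0.4.
# Requires the ensemble_boxes package (pip install ensemble-boxes).
from ensemble_boxes import weighted_boxes_fusion

# One entry per model: boxes as [x1, y1, x2, y2] normalized to [0, 1].
boxes_list = [
    [[0.10, 0.10, 0.40, 0.40]],  # model A prediction (placeholder)
    [[0.12, 0.11, 0.42, 0.39]],  # model B prediction (placeholder)
]
scores_list = [[0.90], [0.75]]
labels_list = [[1], [1]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=None,      # equal weight for every model
    iou_thr=0.4,       # boxes overlapping above this IoU are fused
    skip_box_thr=0.0,  # keep all boxes regardless of score
)
print(boxes, scores, labels)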

Environment (env) setup

  • Experiment environment: Ubuntu 18.04, CUDA 11.3, Anaconda3, Python 3.8
  1. git clone (+ set folder permissions)
git clone https://github.com/MAILAB-Yonsei/capsule_endoscopy_detection.git
chmod -R 777 capsule_endoscopy_detection
cd capsule_endoscopy_detection
  2. Create the env for everything except cbnet (all_except_cbnet)
conda create -n all_except_cbnet python=3.8
conda activate all_except_cbnet
Install PyTorch (e.g. conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch)
pip install openmim
mim install mmdet
pip install -r requirements_all_except_cbnet.txt
conda deactivate
  3. Create the env for cbnet (cbnet)
conda create -n cbnet python=3.8
conda activate cbnet
Install PyTorch (e.g. conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch)
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
     (e.g. pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html; to look up {cu_version}/{torch_version}, see the version-check snippet after this block)
cd UniverseNet
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
pip install instaboostfast
pip install git+https://github.com/cocodataset/panopticapi.git
pip install git+https://github.com/lvis-dataset/lvis-api.git
pip install "albumentations>=0.3.2" --no-binary imgaug,albumentations
pip install pandas
pip install tqdm
pip install shapely
conda deactivate
cd ..
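
To fill in {cu_version} and {torch_version} in the mmcv-full index URL above, you can print the versions installed in the active env; this is only a small helper snippet, assuming PyTorch is already installed (run it with python inside the cbnet env).

# Print the values needed for the mmcv-full index URL,
# e.g. torch 1.10.0 + CUDA 11.3 -> .../dist/cu113/torch1.10.0/index.html
import torch

print("torch_version:", torch.__version__)                     # e.g. 1.10.0
print("cu_version: cu" + torch.version.cuda.replace(".", ""))  # e.g. cu113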

Running the main code

[For a detailed explanation of each STEP, please refer to the comments in the corresponding .sh file in the /all_steps folder!]

STEP0. Set the data root path

cd all_steps
gedit data_path.txt

Specify the absolute path of the data in the data_path.txt file!!! (e.g. /mnt/data)

STEP1. Data preparation (takes about 20-30 minutes)

conda activate all_except_cbnet
bash STEP1_data_preparation.sh
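
STEP1_data_preparation.sh performs the format conversion and train/validation split described in the overview. The snippet below is only a rough illustration of that general idea (a random image-level split plus a minimal COCO-style annotation skeleton); the paths, file names, and the single "lesion" category are hypothetical placeholders and do not reflect the script's actual logic.

# Illustration only: random train/val split and a COCO-style annotation skeleton.
# The data root, file pattern, and category are hypothetical placeholders.
import json, random
from pathlib import Path

image_paths = sorted(Path("/mnt/data/train").glob("*.jpg"))  # hypothetical data root
random.seed(0)
random.shuffle(image_paths)
split = int(0.9 * len(image_paths))
train_imgs, val_imgs = image_paths[:split], image_paths[split:]

coco = {
    "images": [{"id": i, "file_name": p.name} for i, p in enumerate(train_imgs)],
    "annotations": [],  # each entry: {"image_id", "category_id", "bbox": [x, y, w, h], ...}
    "categories": [{"id": 1, "name": "lesion"}],  # hypothetical category
}
with open("train_coco.json", "w") as f:
    json.dump(coco, f)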

STEP2. Train each model. (If you only want to run inference with the pretrained models, skip straight to STEP3!)

  • Training for everything except cbnet
conda activate all_except_cbnet
bash STEP2_train_model1_atss_swin-l_ms.sh
bash STEP2_train_model2_detectors_cascade_rcnn_r50_ms.sh
bash STEP2_train_model3_faster_rcnn_swin-l_ms.sh
bash STEP2_train_model4_retinanet_swin-l.sh
bash STEP2_train_model5_retinanet_swin-l_ms.sh
bash STEP2_train_model6_retinanet_swin-t_ms.sh
bash STEP2_train_model7_yolor.sh
bash STEP2_train_model8_yolo5x.sh
  • Training for cbnet
conda activate cbnet
bash STEP2_train_model9_cbnet_faster_rcnn_swin-l_ms.sh

STEP3. Run inference for all models. (Takes about 20-30 minutes per model)

  • If you skip STEP2 and test with the pretrained models, complete the steps below and then run the STEP3 commands:
    • Place the mmdetection/ckpts folder downloaded from the weight file link below under the /mmdetection folder.
    • Place the UniverseNet/ckpts folder downloaded from the weight file link below under the /UniverseNet folder.
    • Place the YOLO/ckpts folder downloaded from the weight file link below under the /YOLO folder.
    • Weight file link: https://drive.google.com/drive/folders/151KJC3FTUsK5mfx4TtNbhiFuuvLIeGz-?usp=sharing (for one way to download this folder from the command line, see the sketch at the end of this STEP)
  • Inference for everything except cbnet
conda activate all_except_cbnet
bash STEP3_inference_all_except_cbnet.sh
  • Inference for cbnet
conda activate cbnet
bash STEP3_inference_cbnet.sh
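
If you would rather fetch the weight folder from the command line than through the browser, one option (not part of this repository) is the gdown package; this assumes gdown is installed (pip install gdown) and that the Drive folder is shared publicly.

# Download the shared Google Drive folder that contains the ckpts directories,
# then move mmdetection/ckpts, UniverseNet/ckpts, and YOLO/ckpts into place
# as described above. The output directory "weights" is arbitrary.
import gdown

url = "https://drive.google.com/drive/folders/151KJC3FTUsK5mfx4TtNbhiFuuvLIeGz-?usp=sharing"
gdown.download_folder(url=url, output="weights", quiet=False)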

STEP4. Ensemble all of the model results.

conda activate all_except_cbnet
bash STEP4_ensemble.sh
  • The final file is created as 'final.csv' in the top-level directory!!!

Notes

The scripts are organized to be run in order, so in cases such as running one script twice you may run into problems. Please keep this in mind.

In the current code, validation is commented out. If you want to run it, uncomment the validation section in predict.py and set the path to the val_answer.csv file.

(predict.py locations: /mmdetection/predict/main.py, /UniverseNet/predict/main.py)

Owner

MAILAB (Medical Artificial Intelligence Laboratory at Yonsei University, Republic of Korea)