MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research

Overview

Moose-logo

🦌 About MOOSE

MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research. The pipeline is based on nnU-Net and can segment 120 unique tissue classes from a whole-body 18F-FDG PET/CT image.

🗂 Required folder structure

MOOSE inherently performs batch-wise analysis: once all the patients to be analysed are placed in a main directory, MOOSE processes them sequentially. The output folders that the script creates itself are highlighted in CAPS. Organising the folder structure is the sole responsibility of the user.

├── main_folder                     # The mother folder that holds all the patient folders (folder name can be anything)
│   ├── patient_folder_1            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   ├── patient_folder_2            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   ├── patient_folder_n            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
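To sanity-check the input data before a long batch run, the folder contents can be inspected for DICOM files with the expected modality tags. The snippet below is a minimal, hypothetical pre-flight check; it is not part of MOOSE, it only assumes pydicom is installed and that the folder layout matches the tree above.

# Hypothetical pre-flight check (not part of MOOSE): list the DICOM modalities
# found inside each patient folder before starting a batch run.
from pathlib import Path
import pydicom

def modalities_in_folder(patient_dir: Path) -> set:
    """Return the set of DICOM Modality tags found under a patient folder."""
    found = set()
    for path in patient_dir.rglob("*"):
        if not path.is_file():
            continue
        try:
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            found.add(ds.Modality)
        except Exception:
            continue  # skip anything that is not a readable DICOM file
    return found

main_folder = Path('/home/kyloren/Documents/main_folder')  # example path from the Usage section
for patient_dir in sorted(p for p in main_folder.iterdir() if p.is_dir()):
    found = modalities_in_folder(patient_dir)
    print(f"{patient_dir.name}: modalities found = {sorted(found)}")
    if 'CT' not in found:
        print("  WARNING: no CT series found; MOOSE needs at least a CT series.")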

⛔️ Hard requirements

The entire script has ONLY been tested on Ubuntu Linux, with the following hardware configuration:

  • Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
  • 256 GB of RAM (Very important for total-body datasets)
  • 1 x Nvidia GeForce RTX 3090 Ti

We are currently testing different configurations, but the 256 GB of RAM seems to be a hard requirement.

⚙️ Installation

Copy the commands below and paste them into your Ubuntu terminal; the installer should take care of the rest. Pay attention during the installation, as the FSL installation requires you to answer some questions. A fresh install takes approximately 30 minutes.

git clone https://github.com/LalithShiyam/MOOSE.git
cd MOOSE
source ./moose_installer.sh

NOTE: Do not forget to source the .bashrc file again

source ~/.bashrc
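The installer relies on environment variables being loaded into the shell (see the environment-variable issues in the Comments section below). An optional sanity check after sourcing .bashrc could look like the sketch below; the variable names follow the usual nnUNet convention mentioned in those issues and may differ on your setup.

# Optional sanity check (not part of MOOSE): verify that the nnUNet-related
# environment variables are visible after sourcing ~/.bashrc.
import os

required = ['nnUNet_raw_data_base', 'nnUNet_preprocessed', 'RESULTS_FOLDER']
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {missing}. "
                     "Re-run 'source ~/.bashrc' or re-run the MOOSE installer.")
print('All expected environment variables are set.')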

🖥 Usage

  • To run MOOSE directly from the command-line terminal with the default options, use the following command. By default, MOOSE performs the error analysis (see paper) in similarity space and assumes that the given PET image (if provided) is static.
#syntax:
moose -f path_to_main_folder 

#example: 
moose -f '/home/kyloren/Documents/main_folder'
  • To tell the program whether the given 18F-FDG PET is static (-dp False) or dynamic (-dp True), and to switch the error analysis in 'similarity space' on (-ea True) or off (-ea False), use the following command with the appropriate flags.
#syntax:
moose -f path_to_main_folder -ea False -dp True 

#example for performing error analysis for a static PET/CT image: 
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp False

#example for performing error analysis for a dynamic PET/CT image:
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp True

#example for not performing error analysis:
moose -f '/home/kyloren/Documents/main_folder' -ea False -dp False

For interactive execution, we have created a notebook version of the script, which can be found inside the 'notebooks' folder: ~/MOOSE/MOOSE/notebooks.

📈 Results

  • The multi-label atlas for each subject will be stored in the auto-generated labels folder under the subject's respective directory (see the folder structure above). The label-index-to-region correspondence is stored in the Excel sheet MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx, which can be found inside the ~/MOOSE/MOOSE/similarity-space folder.
  • In addition, an auto-generated Segmentation-Risk-of-error-analysis-XXXX.xlsx file will be created in the individual subject directory ('XXXX'). This Excel file highlights segmentations that might be erroneous and is intended to serve as a quality-control measure.
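To go from the multi-label atlas to a single-organ mask, look up the organ's label index in the correspondence Excel sheet and threshold the atlas at that value. The sketch below is only an illustration: the column names ('Organ', 'Label') and the atlas file name are assumptions and should be checked against the actual files.

# Illustration only (not part of MOOSE): extract one organ mask from the
# multi-label atlas using the label-index correspondence sheet.
import pandas as pd          # reading .xlsx files also requires openpyxl
import SimpleITK as sitk

correspondence = pd.read_excel('MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx')
# Column names 'Organ' and 'Label' are assumptions; adapt them to the sheet.
label_index = int(correspondence.loc[correspondence['Organ'] == 'Liver', 'Label'].iloc[0])

atlas = sitk.ReadImage('labels/multilabel-atlas.nii.gz')  # file name is illustrative
organ_mask = sitk.BinaryThreshold(atlas,
                                  lowerThreshold=label_index,
                                  upperThreshold=label_index,
                                  insideValue=1,
                                  outsideValue=0)
sitk.WriteImage(organ_mask, 'liver-mask.nii.gz')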

📖 Citations

🙏 Acknowledgement

This research is supported through an IBM University Cloud Award (https://www.research.ibm.com/university/).

🙋 FAQ

[1] Will MOOSE only work on whole-body 18F-FDG PET/CT datasets?

MOOSE ideally works on whole-body (head-to-toe) PET/CT datasets, but it also works on semi-whole-body PET/CT datasets (head to pelvis). Unfortunately, we haven't tested other fields-of-view. We will post the evaluations soon.

[2] Will MOOSE only work on multimodal 18F-FDG PET/CT datasets or can it also be applied to CT only? or PET only?

MOOSE automatically infers the modality type from the DICOM header tags. If the user provides multimodal 18F-FDG PET/CT datasets, MOOSE builds the entire atlas with 120 tissues. The user can also provide a CT-only DICOM folder; MOOSE will infer the modality type and segment only the non-cerebral tissues (36/120 tissues), without segmenting the 83 subregions of the brain. MOOSE will definitely not work if provided with 18F-FDG PET images alone.

[3] Will MOOSE work on non-DICOM formats?

Unfortunately, the current version accepts only the DICOM format. In the future, we will try to enable non-DICOM formats as well.

Comments
  • BUG: IndexError: list index out of range

    I am running MOOSE on a patient folder with two subfolders for CT and PET in DICOM format. However, I am getting this error message:

        moose_ct_atlas = ie.segment_ct(ct_file[0], out_dir)
      File "/export/moose/moose-0.1.0/src/inferenceEngine.py", line 78, in segment_ct
        out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]
    IndexError: list index out of range

    Any suggestion, please?

    Thanks,

    opened by Ompsda 14
  • Let users know if environment variables are not loaded

    Is your feature request related to a problem? Please describe. If the environment variables are not loaded, MOOSE fails silently like so:

    ✔ Converted DICOM images in /home/user/Data/... to NIFTI
    - Only CT data found in folder /home/user/Data/..., MOOSE will construct noncerebral tissue atlas (n=37) based on CT 
    - Initiating CT segmentation protocols
    - CT image to be segmented: /home/user/Data/...._0000.nii.gz
    ✔ Segmented abdominal organs from /home/user/Data/..._0000.nii.gz                                     
    Traceback (most recent call last):                                                                                                                                                                                 
        File "/usr/local/bin/moose", line 131, in <module>
            ct_atlas = ie.segment_ct(ct_file[0], out_dir)                                                                                                                                                             
        File "/home/user/Code/MOOSE/src/inferenceEngine.py", line 78, in segment_ct                                                                                                                                        
            out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]                
    IndexError: list index out of range
    

    Describe the solution you'd like It would be nice to let the user know that the problem is that the nnUNet_raw_data_base, nnUNet_preprocessed, etc. env variables are not set.

    enhancement 
    opened by chris-clem 8
  • BUG: sitk::ERROR: The file MOOSE-Split-unified-PET-CT-atlas.nii.gz does not exist.

    Hi,

    I am trying to run MOOSE on a bunch of patients with whole-body CTs. For two of the patient, MOOSE fails with the following error

    ✔ Segmented psoas from /home/user/Data/....IMA_0000.nii.gz                                              
    - Conducting automatic error analysis in similarity space for: /home/user/Data/.../labels/MOOSE-Non-cerebral-tissues-CT-....nii.gz
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 139, in <module>                                                                                                                                                        
        ea.similarity_space(ct_atlas, sim_space_dir, segmentation_error_stats)                                                                                                                                         
      File "/home/user/Code/MOOSE/src/errorAnalysis.py", line 147, in similarity_space
        shape_parameters = iop.get_shape_parameters(split_atlas)
      File "/home/user/Code/MOOSE/src/imageOp.py", line 86, in get_shape_parameters
        label_img = SimpleITK.Cast(SimpleITK.ReadImage(label_image), SimpleITK.sitkInt32)
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
        return reader.Execute()
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
        return _SimpleITK.ImageFileReader_Execute(self)
    RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
    sitk::ERROR: The file "/home/user/Data/.../labels/sim_space/similarity-space/MOOSE-Split-unified-PET-CT-atlas.nii.gz" does not exist.
    

    Do you know what could be the problem if the file not existing? It works for the other patients.

    opened by chris-clem 6
  • BUG: Brain label error still persists

    Need to manually start again:

    Calculated SUV image for SUV extraction!

    • Brain found in field-of-view of PET/CT data...
    • Cropping brain from PET image using the aligned CT brain mask

    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 214, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE-V.1.0/src/imageOp.py", line 228, in crop_image_using_mask
        bbox = np.asarray(label_shape_filter.GetBoundingBox(1))
      File "/usr/local/lib/python3.8/dist-packages/SimpleITK/SimpleITK.py", line 36183, in GetBoundingBox
        return _SimpleITK.LabelShapeStatisticsImageFilter_GetBoundingBox(self, label)
    RuntimeError: Exception thrown in SimpleITK LabelShapeStatisticsImageFilter_GetBoundingBox: /tmp/SimpleITK-build/ITK-prefix/include/ITK-5.2/itkLabelMap.hxx:151: ITK ERROR: LabelMap(0x9547bd0): No label object with label 1.
    bug 
    opened by josefyu 3
  • Feat: Multimoose

    Currently, MOOSE runs on a server configuration, so there is a good chance that the user is using a DGX or similar. In that case, it would make sense to fully utilise the capabilities of the hardware. Similar to FALCON, MOOSE should run in parallel based on the hardware capabilities.

    enhancement 
    opened by LalithShiyam 3
  • Brain cropping fails with dynamic datasets

    The following error occurred after using Moose with dynamic datasets of Vision lung cancer patients. All other segmentations and SUV extraction properly worked. No error occurred after re-running Moose with the corresponding static dataset.

    Brain found in field-of-view of PET/CT data...                         
    - Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 215, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE/src/imageOp.py", line 237, in crop_image_using_mask
        out_of_bounds = upper_bounds >= img_dim
    ValueError: operands could not be broadcast together with shapes (3,) (4,)
    
    opened by DariaFerrara 2
  • BUG: WSL does not have unzip installed and MOOSE fails silently due to a broken installation.

    MOOSE fails with an index error when trying to run on WSL, due to a broken installation: no moose-files folder is created when the algorithm is installed.

    Steps to reproduce the behavior: Install through WSL as described in github.

    Expected behaviour: the moose-files folder should be created during installation, and MOOSE should run as required.

    Screenshots of the errors were attached to the issue.

    Windows 11 22H2

    opened by paula-m 1
  • Feat: Batch remove temporary files of faulty processed data folders

    When MOOSE fails to infer the dataset, the command stops and the folders are left with temporary files in the following structure:

    Newly created folders: CT, PT, labels, stats, temp and 2 .JSON files.

    In order to clean these datasets and make them executable again, it would be nice to have a command to revert them to their original state. The commands that can be used manually are listed below.

    # run from the work directory that contains the faulty patient folders
    find -maxdepth 2 -name CT -exec rm -rf {} \;
    find -maxdepth 2 -name PT -exec rm -rf {} \;
    find -maxdepth 2 -name labels -exec rm -rf {} \;
    find -maxdepth 2 -name temp -exec rm -rf {} \;
    find -maxdepth 2 -name stats -exec rm -rf {} \;

    opened by josefyu 1
  • Feat: Find presence of brain using a CNN

    Right now, MOOSE breaks when there is no brain in the PET image. The elegant way would be to figure out whether there is a brain in the FOV of the PET and initiate the segmentation protocols accordingly. It seems quite hard to determine whether a given image has a brain in the field-of-view using hand-engineered features. The smartest way would be to generate a MIP or the middle slice of the PET image (if given) and use a 2D CNN-based binary classifier to figure out whether the brain is in the FOV or not (an illustrative sketch follows the checklist below).

    The game plan is the following:

    • [x] Extract the middle slice (coronal plane)

    • [x] Convert it from DICOM to .png and transform the PET intensities between 0-255 (Graylevels)

    • [x] Curate 80 slices (50 PET with no brain, 50 PET with a brain) and perform the training.

    • [x] Implement a 2D CNN binary-classifier (PyTorch <3 fastai)

    • [x] Make sure the data augmentations of the 2D CNN have random cropping

    • [x] Then use the trained model to infer whether a given volume has a brain or not.
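    For illustration only (this is not the actual MOOSE training code), a 2D binary classifier along the lines of the plan above could be sketched with fastai roughly as follows; the folder layout and hyper-parameters are assumptions:

    # Illustrative sketch (fastai v2 API, not the MOOSE training code):
    # classify coronal PET slices as 'brain' vs 'no_brain'.
    # Assumes a folder layout slices/brain/*.png and slices/no_brain/*.png.
    from fastai.vision.all import *

    dls = ImageDataLoaders.from_folder(
        Path('slices'),
        valid_pct=0.2,
        item_tfms=RandomResizedCrop(224),  # random cropping, as listed in the plan
        batch_tfms=aug_transforms(),
    )
    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fine_tune(5)

    # Inference on the middle slice extracted from a new PET volume:
    pred_class, pred_idx, probs = learn.predict('middle_slice.png')
    print(pred_class, probs[pred_idx])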

    bug enhancement 
    opened by LalithShiyam 1
  • Feat: Create docker image for MOOSEv0.1.0

    Problem: Since MOOSE is mostly used on servers, it might be worthwhile to have a Docker image for MOOSEv0.1.0.

    Solution: Create one, with the Docker image hosted on IBM Cloud.

    enhancement 
    opened by LalithShiyam 0
  • BUG: MOOSE fails with dynamic PET

    MOOSE fails when presented with a dynamic PET in the latest version. It works as expected with static 3D images.

    MOOSE probably doesn't need to do anything special with the 4D dynamic images, but it should probably still produce the segmented CT output. Additionally, it would be great to have a registration between the CT and the final frame of the PET. Motion correction of the PET could then be performed with FALCON, and mapped back to the CT.

    enhancement 
    opened by aaron-rohn 0
  • Skip patient instead of terminate in case of an error

    Hello,

    would it be possible to skip a patient and process the next one in case of an error (e.g. empty CT dir) and not stop the process?

    And then maybe in the end you get a list of the patient IDs that failed.
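    A minimal sketch of the requested behaviour (hypothetical helper, not MOOSE code) could look like this, where process_patient stands in for the per-patient pipeline:

    # Hypothetical sketch (not MOOSE code): process each patient folder, skip
    # the ones that raise an error, and return the IDs that failed.
    from pathlib import Path
    from typing import Callable, List

    def run_batch(main_folder: Path, process_patient: Callable[[Path], None]) -> List[str]:
        failed = []
        for patient_dir in sorted(p for p in main_folder.iterdir() if p.is_dir()):
            try:
                process_patient(patient_dir)  # placeholder for MOOSE's per-patient pipeline
            except Exception as err:
                print(f"Skipping {patient_dir.name}: {err}")
                failed.append(patient_dir.name)
        return failed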

    opened by chris-clem 3
  • Manage MOOSE env vars

    Dear MOOSE team,

    I mentioned the following issue in another issue and wanted to create a new one for it:

    I don't know if adding the env variables to `.bashrc` is the best place to do it. Some users might use zsh and others might use nnUNet separately.
    

    Originally posted by @chris-clem in https://github.com/QIMP-Team/MOOSE/issues/42#issuecomment-1286930959.

    As a quick solution, I added an env_vars.sh file in the MOOSE repo dir that I source instead of .bashrc. In the meantime, I have looked into how people handle this problem in general and found the following possibilities:

    1. Create a .env file in the repo dir and load it with python-dotenv as explained here.
    2. Create a .env file in the repo dir and recommend users to use direnv, which then automatically loads the env variables when changing in the MOOSE dir.
    3. Recommend users to create a MOOSE conda environment and enable loading and unloading the env vars when activating/ deactivating the conda environment as described here.

    The downside of option 1 is that it requires a new dependency, the downside of option 2 is that it requires a new program, and the downside of option 3 is that it requires conda for managing the environment.
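    For illustration, option 1 with python-dotenv could look like the sketch below; the variable names simply mirror the ones mentioned above:

    # Illustrative sketch of option 1 (python-dotenv). Assumes a .env file in the
    # MOOSE repo dir containing lines such as:
    #   nnUNet_raw_data_base=/path/to/nnUNet_raw
    #   nnUNet_preprocessed=/path/to/nnUNet_preprocessed
    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current working directory by default
    print(os.environ.get('nnUNet_raw_data_base'))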

    What do you think is the best option?

    opened by chris-clem 5
  • Feat: Prune/Compress the nnUNet models for performance gains.

    Problem

    Inference is a tad bit slow when it comes to large datasets.

    Solution: Performance gains can be achieved by using Intel's Neural Compressor: https://github.com/intel/neural-compressor/tree/master/examples/pytorch/image_recognition/3d-unet/quantization/ptq/eager. Intel has already provided an example of how to do this, so we just need to implement it to get a lean model (the actual performance gains still need to be checked).

    Alternative solution: bring in a fast resampling function (torch or others).

    enhancement 
    opened by LalithShiyam 4
  • Feat: Reduce memory requirement for MOOSE during inference

    Problem: MOOSE is based on nnUNet, and the current inference takes a lot of memory on total-body datasets (uEXPLORER/QUADRA, upper limit: 256 GB). This is far more memory than most users have available. The memory bottleneck is explained here: https://github.com/MIC-DKFZ/nnUNet/issues/896

    Solution: The solution seems to be to find a faster / more memory-efficient resampling scheme than the skimage-based one. People have already suggested solutions for speed, based on https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html, and an elaborate description can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1093.

    But the memory consumption is still a problem. @dhaberl @Keyn34: Consider the alternative of Nvidia's cuCIM (cucim.skimage.transform.resize) in combination with Dask for block processing (chunks consume far less memory; I have used this for kinetic modelling).

    Impact: This would result in faster inference and would hopefully also remove the memory bottleneck for MOOSE and for any model inference via nnUNet.
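    As a rough illustration of the torch-based resampling idea referenced above (not a drop-in replacement for nnUNet's internal resampling pipeline):

    # Rough illustration: resample a 3D volume with torch.nn.functional.interpolate.
    import numpy as np
    import torch
    import torch.nn.functional as F

    def resample_volume(volume: np.ndarray, new_shape, mode: str = 'trilinear') -> np.ndarray:
        """Resample a (D, H, W) volume to new_shape using torch interpolation."""
        t = torch.from_numpy(volume.astype(np.float32))[None, None]  # -> (1, 1, D, H, W)
        out = F.interpolate(t, size=tuple(new_shape), mode=mode, align_corners=False)
        return out[0, 0].numpy()

    volume = np.random.rand(64, 128, 128).astype(np.float32)  # dummy volume
    print(resample_volume(volume, (128, 256, 256)).shape)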

    enhancement 
    opened by LalithShiyam 2
  • Analysis request: MOOSE + PET-Parameter extraction of PCA cohort

    Analysis request for prostate cancer cohort as follows:

    • [x] MOOSE cohort -> Validation of Segmentations by me
      • [ ] Extract PET-Parameters from MOOSEd Segments
    • [x] Delete all hand-drawn PET-Segmentations starting with cubic*
    • [ ] Merge all the remaining Segmentations (pb*, sv*, pln*...) on a patient level by the following convention:
      • [ ] all Segmentations to a Master_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: pb* + sv* -> Prostate_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: dln* + pln* + rln* -> Lymph_node_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: bone* -> Bone_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: adrenal* + liver* + pleura* + lung* + rectum* + skin* + peritoneal* + org* + organ* + psoas* + testis* + lung* + cavern* -> Organ_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
    Analysis request 
    opened by KCHK1234 8
  • Bug: Nasal mucosa as skeletal muscle

    In case of mucosal congestion in the nasal cavity and paranasal sinuses -> misclassification as skeletal muscle. This appears often, but I think the effects are minor, hence a MINOR bug. All instances have been recorded.

    bug 
    opened by KCHK1234 2
Releases (moose-v0.1.4)
  • moose-v0.1.4(Oct 22, 2022)

    What's Changed

    • Feature: Adding checks for environment variables by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/43
    • Bug: nnUNet broke suddenly due to version issues; the MOOSE installation file will now always build the latest version of nnUNet from the git repo (https://github.com/MIC-DKFZ/nnUNet/issues/1132). Please re-install MOOSE if it doesn't work due to this bug.

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.3...moose-v0.1.4

  • moose-v0.1.3(Jul 16, 2022)

    What's Changed

    • Created CODE_OF_CONDUCT.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/32
    • Updated README.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/35
    • Created a docker image for MOOSEv0.1.0 by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/37

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.2...moose-v0.1.3

  • moose-v0.1.2(Jul 7, 2022)

  • moose-v0.1.1-rc(Jun 27, 2022)

    What's Changed

    • BUG: Fixed moose_uninstaller to remove env variables. by @LalithShiyam in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/28

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/compare/moose-v0.1.0-rc...moose-v0.1.1-rc

  • moose-v0.1.0-rc(Jun 27, 2022)

    What's Changed

    • The source code has been made modular to ensure maintainability.
    • MOOSE now generates log files for each run, which makes it easier to debug.
    • The output messages are much cleaner and organised, with clean progress bars.
    • FSL dependency is completely removed. We use nibabel now.
    • MOOSE now creates a stats folder which contains the following metrics in a '.csv' file:
      • SUV values (mean, max, std, min), if PET images are provided
      • HU values (mean, max, std, min)
      • Volume metrics from the CT
    • MOOSE now has a binary classifier (fastai-based) which figures out whether a given PET volume has a brain in the field-of-view; it works most of the time.
    • Automated affine alignment between PET and CT, if both images are present, to ensure spatial alignment.

    New Contributors

    • @LalithShiyam made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/4
    • @Keyn34 made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/11

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/commits/moose-v0.1.0-rc

    To-do:

    • [ ] Docker image for the current version
Owner
QIMP team
Our vision is to enable a wider adoption of fully-quantitative molecular image information in the context of personalized medicine.