TorchXRayVision: A library of chest X-ray datasets and models.

Overview

torchxrayvision

A library for chest X-ray datasets and models, including pre-trained models.

(🎬 promo video about the project)

Motivation: While there are many publications focusing on the prediction of radiological and clinical findings from chest X-ray images, much of this work is inaccessible to other researchers.

  • For researchers addressing clinical questions, training models from scratch is a waste of time. To address this, TorchXRayVision provides pre-trained models which are trained on large cohorts of data and enable 1) rapid analysis of large datasets and 2) feature reuse for few-shot learning.
  • For researchers developing algorithms, it is important to robustly evaluate models using multiple external datasets. Metadata associated with each dataset can vary greatly, which makes it difficult to apply methods to multiple datasets. TorchXRayVision provides access to many datasets in a uniform way so that they can be swapped out with a single line of code. These datasets can also be merged and filtered to construct specific distributional shifts for studying generalization.

This code is still under development

Twitter: @torchxrayvision

Getting started

pip install torchxrayvision

import torchxrayvision as xrv

These are the default pathologies:

xrv.datasets.default_pathologies 

['Atelectasis',
 'Consolidation',
 'Infiltration',
 'Pneumothorax',
 'Edema',
 'Emphysema',
 'Fibrosis',
 'Effusion',
 'Pneumonia',
 'Pleural_Thickening',
 'Cardiomegaly',
 'Nodule',
 'Mass',
 'Hernia',
 'Lung Lesion',
 'Fracture',
 'Lung Opacity',
 'Enlarged Cardiomediastinum']

Models (demo notebook)

Specify weights for pretrained models (currently all DenseNet121). Note: each pretrained model has 18 outputs. The "all" model has every output trained. However, for the other weights some targets are not trained and will predict randomly because they do not exist in the training dataset. The only valid outputs are listed in the field {dataset}.pathologies of the dataset that corresponds to the weights.

# 224x224 models
model = xrv.models.DenseNet(weights="densenet121-res224-all")
model = xrv.models.DenseNet(weights="densenet121-res224-rsna") # RSNA Pneumonia Challenge
model = xrv.models.DenseNet(weights="densenet121-res224-nih") # NIH chest X-ray8
model = xrv.models.DenseNet(weights="densenet121-res224-pc") # PadChest (University of Alicante)
model = xrv.models.DenseNet(weights="densenet121-res224-chex") # CheXpert (Stanford)
model = xrv.models.DenseNet(weights="densenet121-res224-mimic_nb") # MIMIC-CXR (MIT)
model = xrv.models.DenseNet(weights="densenet121-res224-mimic_ch") # MIMIC-CXR (MIT)

# 512x512 models
model = xrv.models.ResNet(weights="resnet50-res512-all")

# DenseNet121 from JF Healthcare for the CheXpert competition
model = xrv.baseline_models.jfhealthcare.DenseNet() 

# Official Stanford CheXpert model
model = xrv.baseline_models.chexpert.DenseNet()
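
For example, a minimal end-to-end prediction with the "all" model might look like the following sketch (the file name xray.jpg is hypothetical; any grayscale chest X-ray works):

import skimage.io
import torch
import torchvision
import torchxrayvision as xrv

img = skimage.io.imread("xray.jpg")      # hypothetical input file
img = xrv.datasets.normalize(img, 255)   # rescale 8-bit pixel values to [-1024, 1024]
if img.ndim == 3:
    img = img.mean(2)                    # collapse RGB channels to a single channel
img = img[None, ...]                     # add a channel dimension: (1, H, W)

transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop(),
                                            xrv.datasets.XRayResizer(224)])
img = transform(img)

model = xrv.models.DenseNet(weights="densenet121-res224-all")
with torch.no_grad():
    out = model(torch.from_numpy(img).unsqueeze(0))  # batch of 1: (1, 1, 224, 224)

# pair each of the 18 outputs with its pathology name
print(dict(zip(model.pathologies, out[0].numpy())))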

Benchmarks of the models are here: BENCHMARKS.md

The performance of some of the models can be seen in this paper arxiv.org/abs/2002.02497.

Autoencoders

You can also load a pre-trained autoencoder that is trained on the PadChest, NIH, CheXpert, and MIMIC datasets.

ae = xrv.autoencoders.ResNetAE(weights="101-elastic")
z = ae.encode(image)
image2 = ae.decode(z)
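
The image passed to encode is assumed to be preprocessed the same way as the classifier inputs above (normalized to [-1024, 1024], cropped, resized, with batch and channel dimensions). A minimal sketch, assuming a 224x224 input and a hypothetical file name:

import skimage.io
import torch
import torchvision
import torchxrayvision as xrv

img = skimage.io.imread("xray.jpg")           # hypothetical input file
img = xrv.datasets.normalize(img, 255)        # rescale to [-1024, 1024]
img = img[None, ...]                          # (1, H, W)
transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop(),
                                            xrv.datasets.XRayResizer(224)])
image = torch.from_numpy(transform(img)).unsqueeze(0)  # (1, 1, 224, 224)

ae = xrv.autoencoders.ResNetAE(weights="101-elastic")
z = ae.encode(image)      # latent representation
image2 = ae.decode(z)     # reconstruction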

Datasets (demo notebook)

Only stats for PA/AP views are shown. Datasets may include more.

import torchvision

transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop(),
                                            xrv.datasets.XRayResizer(224)])

d_kaggle = xrv.datasets.RSNA_Pneumonia_Dataset(imgpath="path to stage_2_train_images_jpg",
                                               transform=transform)
                
d_chex = xrv.datasets.CheX_Dataset(imgpath="path to CheXpert-v1.0-small",
                                   csvpath="path to CheXpert-v1.0-small/train.csv",
                                   transform=transform)

d_nih = xrv.datasets.NIH_Dataset(imgpath="path to NIH images")

d_nih2 = xrv.datasets.NIH_Google_Dataset(imgpath="path to NIH images")

d_pc = xrv.datasets.PC_Dataset(imgpath="path to image folder")


d_covid19 = xrv.datasets.COVID19_Dataset() # specify imgpath and csvpath for the dataset

d_siim = xrv.datasets.SIIM_Pneumothorax_Dataset(imgpath="dicom-images-train/",
                                                csvpath="train-rle.csv")

d_vin = xrv.datasets.VinBrain_Dataset(imgpath=".../train",
                                      csvpath=".../train.csv")

National Library of Medicine Tuberculosis datasets (paper)

d_nlmtb = xrv.datasets.NLMTB_Dataset(imgpath="path to MontgomerySet or ChinaSet_AllFiles")

Using MontgomerySet data:
NLMTB_Dataset num_samples=138 views=['PA']
{'Tuberculosis': {0: 80, 1: 58}}
or using ChinaSet_AllFiles data:
NLMTB_Dataset num_samples=662 views=['PA', 'AP']
{'Tuberculosis': {0: 326, 1: 336}}
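
The two collections can also be combined into a single dataset; a sketch using xrv.datasets.Merge_Dataset (both share the single 'Tuberculosis' label):

d_mont = xrv.datasets.NLMTB_Dataset(imgpath="path to MontgomerySet")
d_china = xrv.datasets.NLMTB_Dataset(imgpath="path to ChinaSet_AllFiles")
d_tb = xrv.datasets.Merge_Dataset(datasets=[d_mont, d_china])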

Dataset fields

Each dataset contains a number of fields. These fields are maintained when xrv.datasets.Subset_Dataset and xrv.datasets.Merge_Dataset are used.

Each dataset has a .pathologies field, which is a list of the pathologies contained in this dataset that will appear in the .labels field.

Each dataset has a .labels field which contains a 1, 0, or NaN for each label defined in .pathologies.

Each dataset has a .csv field which corresponds to a pandas DataFrame of the metadata csv file that comes with the data. Each row aligns with the elements of the dataset, so indexing using .iloc will work.

If possible, each dataset's .csv will have some common fields, aligned across datasets. The list is as follows:

csv.patientid An id that will uniquely identify samples in this dataset

csv.offset_day_int An integer time offset for the image, in units of days. This is expected to be used for relative times and has no absolute meaning, although for some datasets it is the epoch time.
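
Putting these fields together, a quick sketch of inspecting a dataset (assuming the dict sample format with "img" and "lab" keys used in the demo notebooks):

d_nih = xrv.datasets.NIH_Dataset(imgpath="path to NIH images")

print(d_nih.pathologies)              # pathologies available in this dataset
print(d_nih.labels.shape)             # (num_samples, num_pathologies); entries are 1, 0, or NaN
print(d_nih.csv.patientid.nunique())  # metadata DataFrame aligned with the samples

sample = d_nih[0]                     # each sample is a dict
print(sample["img"].shape, sample["lab"])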

Dataset tools

relabel_dataset will align labels to have the same order as the pathologies argument.

xrv.datasets.relabel_dataset(xrv.datasets.default_pathologies , d_nih) # has side effects
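
Once two datasets are relabeled to the same pathology order they can be merged and used as one; a sketch assuming d_nih and d_pc were constructed as above:

xrv.datasets.relabel_dataset(xrv.datasets.default_pathologies, d_nih)
xrv.datasets.relabel_dataset(xrv.datasets.default_pathologies, d_pc)
d_merged = xrv.datasets.Merge_Dataset(datasets=[d_nih, d_pc])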

specify a subset of views (demo notebook)

d_kaggle = xrv.datasets.RSNA_Pneumonia_Dataset(imgpath="...",
                                               views=["PA","AP","AP Supine"])

specify only 1 image per patient

d_kaggle = xrv.datasets.RSNA_Pneumonia_Dataset(imgpath="...",
                                               unique_patients=True)

obtain summary statistics per dataset

d_chex = xrv.datasets.CheX_Dataset(imgpath="CheXpert-v1.0-small",
                                   csvpath="CheXpert-v1.0-small/train.csv",
                                   views=["PA","AP"], unique_patients=False)

CheX_Dataset num_samples=191010 views=['PA', 'AP']
{'Atelectasis': {0.0: 17621, 1.0: 29718},
 'Cardiomegaly': {0.0: 22645, 1.0: 23384},
 'Consolidation': {0.0: 30463, 1.0: 12982},
 'Edema': {0.0: 29449, 1.0: 49674},
 'Effusion': {0.0: 34376, 1.0: 76894},
 'Enlarged Cardiomediastinum': {0.0: 26527, 1.0: 9186},
 'Fracture': {0.0: 18111, 1.0: 7434},
 'Lung Lesion': {0.0: 17523, 1.0: 7040},
 'Lung Opacity': {0.0: 20165, 1.0: 94207},
 'Pleural Other': {0.0: 17166, 1.0: 2503},
 'Pneumonia': {0.0: 18105, 1.0: 4674},
 'Pneumothorax': {0.0: 54165, 1.0: 17693},
 'Support Devices': {0.0: 21757, 1.0: 99747}}

Pathology masks (demo notebook)

Masks are available in the following datasets:

xrv.datasets.RSNA_Pneumonia_Dataset() # for Lung Opacity
xrv.datasets.SIIM_Pneumothorax_Dataset() # for Pneumothorax
xrv.datasets.NIH_Dataset() # for Cardiomegaly, Mass, Effusion, ...

Example usage:

d_rsna = xrv.datasets.RSNA_Pneumonia_Dataset(imgpath="stage_2_train_images_jpg",
                                             views=["PA","AP"],
                                             pathology_masks=True)
                                            
# The has_masks column will let you know if any masks exist for that sample
d_rsna.csv.has_masks.value_counts()
False    20672
True      6012       

# Each sample will have a pathology_masks dictionary where the index 
# of each pathology will correspond to a mask of that pathology (if it exists).
# There may be more than one mask per sample, but only one per pathology.
sample["pathology_masks"][d_rsna.pathologies.index("Lung Opacity")]

Masks also work with data augmentation if you pass data_aug=data_transforms to the dataset. The random seed is matched to align the augmentation calls for the image and the mask.
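
A sketch of the hookup (the transform chain is a placeholder, not a recommendation, and assumes a torchvision version whose transforms accept tensor images; any image-to-image callable can be passed as data_aug):

import torch
import torchvision

data_transforms = torchvision.transforms.Compose([
    torch.from_numpy,  # dataset images arrive as (1, H, W) numpy arrays
    torchvision.transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
])

d_rsna = xrv.datasets.RSNA_Pneumonia_Dataset(imgpath="stage_2_train_images_jpg",
                                             views=["PA","AP"],
                                             pathology_masks=True,
                                             data_aug=data_transforms)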

Distribution shift tools (demo notebook)

The class xrv.datasets.CovariateDataset takes two datasets and two arrays representing the labels. It returns samples with the desired ratio of images from each site. The goal is to simulate a covariate shift so that a model focuses on an incorrect feature; the shift can then be reversed in the validation data, causing a catastrophic failure in generalization performance.

ratio=0.0 means images from d1 will have a positive label
ratio=0.5 means images from d1 will have half of the positive labels
ratio=1.0 means images from d1 will have no positive label

With any ratio, the number of samples returned will be the same.

d = xrv.datasets.CovariateDataset(d1=d1,                # dataset 1 with a specific condition
                                  d1_target=d1_target,  # target label to predict
                                  d2=d2,                # dataset 2 with a specific condition
                                  d2_target=d2_target,  # target label to predict
                                  mode="train",         # train, valid, or test
                                  ratio=0.9)
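
A concrete sketch of the target arrays, assuming d1 and d2 are datasets constructed as above and both contain a 'Pneumonia' label:

idx1 = d1.pathologies.index("Pneumonia")
idx2 = d2.pathologies.index("Pneumonia")

d = xrv.datasets.CovariateDataset(d1=d1, d1_target=d1.labels[:, idx1],
                                  d2=d2, d2_target=d2.labels[:, idx2],
                                  mode="train",
                                  ratio=0.9)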

Citation

Joseph Paul Cohen, Joseph Viviano, Paul Morrison, Rupert Brooks, Mohammad Hashir, Hadrien Bertrand 
TorchXRayVision: A library of chest X-ray datasets and models. 
https://github.com/mlmed/torchxrayvision, 2020

@article{Cohen2020xrv,
author = {Cohen, Joseph Paul and Viviano, Joseph and Morrison, Paul and Brooks, Rupert and Hashir, Mohammad and Bertrand, Hadrien},
journal = {https://github.com/mlmed/torchxrayvision},
title = {{TorchXRayVision: A library of chest X-ray datasets and models}},
url = {https://github.com/mlmed/torchxrayvision},
year = {2020}
}


and this paper https://arxiv.org/abs/2002.02497

Joseph Paul Cohen and Mohammad Hashir and Rupert Brooks and Hadrien Bertrand
On the limits of cross-domain generalization in automated X-ray prediction. 
Medical Imaging with Deep Learning 2020 (Online: https://arxiv.org/abs/2002.02497)

@inproceedings{cohen2020limits,
  title={On the limits of cross-domain generalization in automated X-ray prediction},
  author={Cohen, Joseph Paul and Hashir, Mohammad and Brooks, Rupert and Bertrand, Hadrien},
  booktitle={Medical Imaging with Deep Learning},
  year={2020},
  url={https://arxiv.org/abs/2002.02497}
}

Supporters/Sponsors

CIFAR (Canadian Institute for Advanced Research)

Mila, Quebec AI Institute, University of Montreal

Stanford University's Center for Artificial Intelligence in Medicine & Imaging

Carestream Health

Comments
  • taking gradients through the 'all' densenet

    Hi,

    I am trying to plug your 'all' densenet (in eval mode + fixed weights) into my generative pipeline. However, I'm getting errors taking gradients. I saw that you updated the package with the 'op' util we discussed here a couple of weeks ago, so I updated the package myself as well. Now I'm getting a different error, which is:

    Warning: Error detected in AddBackward0. Traceback of forward call that caused the error:
      ...(some prints)...
      File "/home/ubuntu/DR-VAE/drvae/model/vae.py", line 233, in lossfun
        return vae_loss + disc_loss
     (print_stack at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:60)
    Traceback (most recent call last):
    File "/home/ubuntu/DR-VAE/drvae/model/train.py", line 297, in train_epoch_xraydata
        loss.backward()
      File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
        allow_unreachable=True)  # allow_unreachable flag
    **RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false) but got TensorOptions(dtype=float, device=cuda:0, layout=Strided, requires_grad=false) (validate_outputs at /pytorch/torch/csrc/autograd/engine.cpp:484)**
    frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fbd188b5536 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #1: <unknown function> + 0x2d84224 (0x7fbd57f1e224 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
    frame #2: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x548 (0x7fbd57f1fd58 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
    frame #3: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7fbd57f21ce2 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
    frame #4: torch::autograd::Engine::thread_init(int) + 0x39 (0x7fbd57f1a359 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
    frame #5: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7fbd646594d8 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #6: <unknown function> + 0xee0f (0x7fbd65246e0f in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
    frame #7: <unknown function> + 0x76ba (0x7fbd683996ba in /lib/x86_64-linux-gnu/libpthread.so.0)
    frame #8: clone + 0x6d (0x7fbd680cf41d in /lib/x86_64-linux-gnu/libc.so.6)
    

    I'm not sure why it expects cpu there.

    opened by danbider 14
  • Bad performance when making predictions with the CheXpert model

    Hi, thanks for making your work available! I'm trying to do a fairness analysis, and as a first step I need to obtain the model's predictions. I'm focusing on the CheXpert dataset and the CheXpert model. I reproduced the same split (seed=0) as you do, and then made predictions for the test set using your CheXpert model. Computing AUC and other metrics on the test set results in quite mediocre performance, far worse than what is reported in the paper. So I was wondering if I'm missing something big.

    Let me note here that I am using the 'small' version of CheXpert (same as you do) and that I am transforming the test set data when I create the CheX_Dataset object in the following way: [image]

    Your feedback on what I might be doing wrong would be extremely helpful!

    opened by lkourti 10
  • The output of the kaggle densenet model

    Hi, I started playing with your cool package and wanted to make sure I follow. How do I work with the output of the final linear layer of the kaggle model?

    If

    model = xrv.models.DenseNet(weights="kaggle")
    d_kaggle = xrv.datasets.Kaggle_Dataset(..)

    and we push one image forward:

    sample = d_kaggle[92]
    out = model(torch.tensor(sample['PA']).unsqueeze(0))

    and given that the relevant labels in d_kaggle.pathologies appear at indices 8 and 16 of xrv.datasets.default_pathologies, with

    out_softmax = torch.nn.functional.softmax(out[0, [8, 16]], dim=0)

    (or sigmoid, for that matter) I always get out_softmax = [~x, ~x] for every example that I've pushed forward, regardless of the label.

    opened by danbider 10
  • Mismatch between CheX_Dataset and the model with weights="densenet121-res224-chex"

    I found that the summary of CheX_Dataset has 13 disease classes, but at the head of model.py the model with weights="densenet121-res224-chex" provides 11 disease classes. I think these do not match, and the same holds for "mimic_ch" and "mimic_nb". This means the model will provide 7 useless dimensions for each sample. Is some kind of disease missing?

    opened by catfish132 9
  • data loading

    Hi,

    I use your kaggle dataset object (including all data points) and define a data loader to train my model on an AWS EC2 instance with a GPU. I am experiencing volatile GPU utilization that flickers between 0% and 90%, but is mostly at 0%. I tried to make my dataloader as efficient as possible:

        train_loader = torch.utils.data.DataLoader(train_data, 
            batch_size=batch_size, num_workers=8, pin_memory=True, shuffle=True)
    

    But I'm also double-checking other places in my code that might slow things down.

    I'm curious about your experience of training on these datasets. Did you encounter anything similar?

    opened by danbider 9
  • xrv.models.DenseNet(weights="all").cuda() problem in new update

    Hello, thanks for the great repo.

    It seems you recently updated the code. This error happens when I try to load the pretrained net:

    xrv.models.DenseNet(weights="all").cuda()

    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
        770             return modules[name]
        771         raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
    --> 772             type(self).__name__, name))
        773
        774     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

    ModuleAttributeError: 'BatchNorm2d' object has no attribute '_non_persistent_buffers_set'

    opened by dara1400 8
  • Same Output for each image

    I've tried to use the pretrained models on images from MIMIC-CXR. For each model I get the same probabilities for every image, even if the images are very different from each other.

    For example: [images]

    Both images have been center cropped and resized to 224x224 resolution. I used the following code to get outputs from the model:

    model = xrv.models.DenseNet(weights="densenet121-res224-all")
    
    with torch.no_grad(): 
        out0 = model(inputs['image'][0].unsqueeze(0))
        out1 = model(inputs['image'][1].unsqueeze(0))
    

    Both out0 and out1 equal tensor([[0.5995, 0.5762, 0.5281, 0.5486, 0.5477, 0.5251, 0.5390, 0.6766, 0.5525, 0.5918, 0.6122, 0.5254, 0.5949, 0.3159, 0.5500, 0.5202, 0.6688, 0.6592]])

    The same problem occurs with other pretrained models, like the one for MIMIC-CXR, only with different probabilities, but still the same ones for each respective image.

    opened by mohkoh19 6
  • Console always prints dataset statistics

    Hi everyone,

    After loading the PadChest dataset, my console prints its summary statistics after every executed command.

    Looks like this:

    {'Air Trapping': {0.0: 59407, 1.0: 2285}, 'Aortic Atheromatosis': {0.0: 60419, 1.0: 1273}, 'Aortic Elongation': {0.0: 56611, 1.0: 5081}, 'Atelectasis': {0.0: 59273, 1.0: 2419}, 'Bronchiectasis': {0.0: 60821, 1.0: 871}, 'Cardiomegaly': {0.0: 56305, 1.0: 5387}, 'Consolidation': {0.0: 61217, 1.0: 475}, 'Costophrenic Angle Blunting': {0.0: 59587, 1.0: 2105}, 'Edema': {0.0: 61584, 1.0: 108}, 'Effusion': {0.0: 60067, 1.0: 1625}, 'Emphysema': {0.0: 61146, 1.0: 546}, 'Fibrosis': {0.0: 61351, 1.0: 341}, 'Flattened Diaphragm': {0.0: 61379, 1.0: 313}, 'Fracture': {0.0: 60030, 1.0: 1662}, 'Granuloma': {0.0: 60135, 1.0: 1557}, 'Hemidiaphragm Elevation': {0.0: 60806, 1.0: 886}, 'Hernia': {0.0: 60704, 1.0: 988}, 'Hilar Enlargement': {0.0: 58875, 1.0: 2817}, 'Infiltration': {0.0: 57383, 1.0: 4309}, 'Mass': {0.0: 61186, 1.0: 506}, 'Nodule': {0.0: 59502, 1.0: 2190}, 'Pleural_Thickening': {0.0: 59617, 1.0: 2075}, 'Pneumonia': {0.0: 59782, 1.0: 1910}, 'Pneumothorax': {0.0: 61595, 1.0: 97}, 'Scoliosis': {0.0: 57761, 1.0: 3931}, 'Support Devices': {0.0: 60575, 1.0: 1117}, 'Tube': {0.0: 61467, 1.0: 225}, 'Tuberculosis': {0.0: 61290, 1.0: 402}}

    How can I deactivate this?

    Thanks a lot in advance!

    opened by pat-rig 5
  • num_classes no longer changing the classifier

    Today I noticed that all predictions by the "All" model were "16" despite the dataset only giving labels between 0 and 3. At first I thought it was a mistake on my end, but I knew that I hadn't changed anything today. I checked the code anyway and found out that setting num_classes to 4 still gives a model.classifier of Linear(in_features=1024, out_features=18, bias=True).

    Using the normal Densenet121 from torchvision.models works fine, so it is not an issue with my datamodule or data. The very strange part is that this changed all of a sudden while Optuna was searching through hyperparameters. The first 3 or 4 trials were fine, and then all of a sudden I saw label "4" and even "5" on subsequent runs.

    opened by ihamdi 5
  • Finetune pretrained models on different dataset

    Hi all, and thank you for developing this library. The readme says that the pretrained models can be used for feature reuse in few-shot learning. Does that mean it's possible to take a pretrained model and finetune it for a couple of epochs on a different dataset? I spent some time exploring the repo, but I couldn't find anything related to that. I would be truly grateful if you could help me with that. Thanks

    opened by matteopilotto 5
  • Example Results with Pretrained Autoencoder

    Hi,

    Are there any example results from your pretrained autoencoder? I tried it with CheXpert but its decoded images were mostly gray. Is there an example I can compare against, in case I am missing some preprocessing? I am using the normalize function provided in the XRV repo to rescale my images.

    Example on CheXpert:

    [image]

    Thanks!

    opened by Htermotto 4
  • Any dataset tool to transform original big .jpg into small one

    I have downloaded the MIMIC-CXR dataset and I want to train a model myself, but the original images are too big; the total volume is 500GB, so disk IO will be a bottleneck. Is there a script to transform the images into smaller ones?

    opened by catfish132 1
  • Example notebook doesn't work and has poor results when fixed

    Hi,

    When starting with Torch XRV we began with the example notebook. However, it has multiple issues such that it does not run to begin with. We were able to fix it, but the results of the pretrained weights seem poor on the NIH dataset. Additionally, we think there is a bug in the code that will erroneously apply sigmoid twice if apply_sigmoid=True (link).

    I've opened a PR with our changes that fix the notebook (here), but we would appreciate input on the model performance.

    EDIT: I cleared the notebook in the PR so there wouldn't be a lot of output, but essentially what we are seeing is very low precision on NIH res224. Here are the metrics we got from running the above notebook:

    [image]

    Thanks!

    opened by Htermotto 4
  • Sharing models through Hugging Face Hub

    Hi TorchXRayVision team!

    This project is amazing! Several Hugging Face followers and members of the "ML for healthcare" community recommended that we contact you 🤗. I see you host and share models/datasets on your own server. Would you be interested in sharing your models on the Hugging Face Hub?

    This integration would allow you to freely download/upload models and make your work more accessible and visible to the rest of the ML community. We can help you set up a TorchXRayVision organization (for examples, see Facebook AI and Stanford NLP).

    Creating the repos and adding new models should be a relatively straightforward process. This is a step-by-step guide explaining the process in case you're interested. Please let us know if you would be interested and if you have any questions.

    Some of the benefits of sharing your models through the Hub would be:

    • Presence in the HF Hub might lower the barrier to entry for TorchXRayVision as well as increase its visibility.
      • Repos provide useful metadata about their tasks, languages, metrics, etc. that makes them discoverable.
    • Versioning, commit history, and diffs.
    • Multiple features, from TensorBoard visualizations to PapersWithCode integration and more.

    Additionally, we have a library to programmatically access repositories (both downloading pretrained models and pushing, with a lot of nice things such as filtering, caching, etc). If we want to try out this integration, I would suggest you add one or two models manually and then use the huggingface_hub library to implement downloading those models programmatically from torchxrayvision. You might want to check our documentation to read more about it.

    Relevant references:

    Happy to hear your thoughts,

    Omar and the Hugging Face team (cc @osanseviero @abidlabs )

    opened by omarespejel 0
Owner
Machine Learning and Medicine Lab