VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.

Overview


What's New

Below we share, in reverse chronological order, the updates and new releases in VISSL. All VISSL releases are available here.

Introduction

VISSL is a computer VIsion library for state-of-the-art Self-Supervised Learning research with PyTorch. VISSL aims to accelerate the research cycle in self-supervised learning: from designing a new self-supervised task to evaluating the learned representations.

Installation

See INSTALL.md.

Getting Started

Install VISSL by following the installation instructions. After installation, please see Getting Started with VISSL and the Colab Notebook to learn about basic usage.

Documentation

Learn more about VISSL in our documentation, and see projects/ for some projects built on top of VISSL.

Tutorials

Get started with VISSL by trying one of the Colab tutorial notebooks.

Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the VISSL Model Zoo.

Contributors

VISSL is written and maintained by Facebook AI Research.

Development

We welcome new contributions to VISSL and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.

License

VISSL is released under the MIT license.

Citing VISSL

If you find VISSL useful in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.

@misc{goyal2021vissl,
  author =       {Priya Goyal and Quentin Duval and Jeremy Reizenstein and Matthew Leavitt and Min Xu and
                  Benjamin Lefaudeux and Mannat Singh and Vinicius Reis and Mathilde Caron and Piotr Bojanowski and
                  Armand Joulin and Ishan Misra},
  title =        {VISSL},
  howpublished = {\url{https://github.com/facebookresearch/vissl}},
  year =         {2021}
}
Comments
  • Barlow Twins implementation

    Required (TBC)

    • [x] BarlowTwinsLoss and Criterion
    • [x] Documentation
      • [x] Loss
      • [x] SSL Approaches + Index
      • [x] Model Zoo
      • [x] Project
    • [x] Default configs
      • [x] pretrain
      • [x] test/integration
      • [x] debugging/pretrain
    • [x] Benchmarks
      • [x] ImageNet: 70.75 for 300 epochs
      • [x] Imagenette 160: 88.8 Top1 accuracy

    closes #229
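
    For context, the Barlow Twins objective decorrelates the embeddings of two augmented views via their empirical cross-correlation matrix; a minimal PyTorch sketch (independent of this PR's actual implementation, with λ as in the paper):

    import torch

    def barlow_twins_loss(z_a, z_b, lambda_offdiag=5e-3):
        # z_a, z_b: (N, D) embeddings of two augmented views of the same batch
        n, d = z_a.shape
        # standardize each embedding dimension across the batch
        z_a = (z_a - z_a.mean(0)) / z_a.std(0)
        z_b = (z_b - z_b.mean(0)) / z_b.std(0)
        # empirical cross-correlation matrix, shape (D, D)
        c = (z_a.T @ z_b) / n
        c_diff = (c - torch.eye(d)).pow(2)
        # invariance term: pull the diagonal towards 1;
        # redundancy-reduction term: push off-diagonal entries towards 0
        on_diag = c_diff.diagonal().sum()
        off_diag = c_diff.sum() - on_diag
        return on_diag + lambda_offdiag * off_diag

    # toy usage
    loss = barlow_twins_loss(torch.randn(8, 16), torch.randn(8, 16))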

    CLA Signed 
    opened by OlivierDehaene 78
  • [Proposal] Improve the DATA_LIMIT attribute to handle more use cases

    This PR is a draft, pushed for visibility and discussion.

    The additional use cases I propose to support are:

    • being able to sub-select part of a dataset in a balanced way (each label is included the same number of times)
    • being able to sub-select exclusive parts of the same dataset (for instance to have a validation set that does not intersect with a training set, useful for HP searches)
    • make sure that this sub-sampling is deterministic (same seed across all distributed workers)

    This would avoid having to create subsets of datasets such as ImageNet to test on, for instance, 1% of each label. It would also allow benchmarking SSL algorithms in the low-data regime in a more flexible way.
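
    To make the proposal concrete, here is a minimal sketch of deterministic, balanced sub-sampling (illustrative names only; the actual structure of the new DATA_LIMIT attribute is defined in this PR):

    import numpy as np

    def balanced_subset_indices(labels, num_samples, seed=0, skip=0):
        # seed keeps the selection identical across all distributed workers;
        # skip lets a validation split start where the training split ended,
        # so the two subsets never intersect
        labels = np.asarray(labels)
        rng = np.random.RandomState(seed)
        classes = np.unique(labels)
        per_class = num_samples // len(classes)
        picked = []
        for c in classes:
            idx = rng.permutation(np.where(labels == c)[0])  # deterministic given seed
            picked.extend(idx[skip:skip + per_class])
        return np.sort(picked)

    train_idx = balanced_subset_indices([0, 0, 0, 1, 1, 1, 2, 2, 2], num_samples=3)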

    /!\ This PR introduces a breaking change (DATA_LIMIT is no longer an integer but a structure)

    This PR includes:

    • unit tests for the sub-sampling strategies
    • update of all configuration using the DATA_LIMIT attribute
    CLA Signed Merged 
    opened by QuentinDuval 29
  • Loading Trained Models

    Hello, I followed the given tutorials and managed to train a model on a custom dataset. The training seemed to work, but I can't figure out how to use the trained model. I tried building the model as follows:

    import yaml
    from vissl.utils.hydra_config import AttrDict
    cfg = yaml.load(open("path_to_config_yaml"), Loader=yaml.FullLoader)["config"]
    cfg = AttrDict(cfg)
    
    from vissl.models import build_model
    model = build_model(cfg.MODEL, cfg.OPTIMIZER)
    

    Where path_to_config_yaml is the path to the same config as the one used in training (configs/config/quick_1gpu_resnet50_simclr.yaml). The following error occurred:

    AttributeError: AttrDict object has no attribute FEATURE_EVAL_SETTINGS.
    

    Any ideas on how to solve this? Otherwise, is there a tutorial which explains how to load trained models? I have read How to Load Pretrained Models but couldn't really understand it.

    If more information is needed, please comment and I'll provide it. Thanks!
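
    A minimal sketch of an alternative loading path, adapted from the inference question further down this page: merging the training config with VISSL's defaults.yaml is what supplies keys such as MODEL.FEATURE_EVAL_SETTINGS.

    from omegaconf import OmegaConf
    from vissl.utils.hydra_config import AttrDict

    # defaults.yaml provides keys (e.g. MODEL.FEATURE_EVAL_SETTINGS) that the
    # training config alone does not contain
    defaults = OmegaConf.load("vissl/config/defaults.yaml")
    config = OmegaConf.load("configs/config/quick_1gpu_resnet50_simclr.yaml")
    cfg = AttrDict(OmegaConf.merge(defaults, config))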

    awaiting-user-response 
    opened by ItamarSafriel 22
  • [WIP] Fixes for release

    1. Add appropriate pytorch/cuda versions for building apex in conda_apex and conda_vissl.
    2. Separate out integration_tests.sh, as this was repeating unit tests in the apex builds.
    3. Make in_temporary_directory exception-safe: when a test failed while using it, all subsequent tests would fail with the error below (a minimal fix is sketched after the traceback):
    ======================================================================
    ERROR: test_restart_after_preemption_at_epoch (test_state_checkpointing.TestStateCheckpointing)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/private/home/iseessel/conda-bld/vissl_1633020455653/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/utils/test_utils.py", line 86, in wrapped_test
        return test_function(*args, **kwargs)
      File "/private/home/iseessel/conda-bld/vissl_1633020455653/test_tmp/tests/test_state_checkpointing.py", line 80, in test_restart_after_preemption_at_epoch
        with in_temporary_directory():
      File "/private/home/iseessel/conda-bld/vissl_1633020455653/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/contextlib.py", line 81, in _enter_
        return next(self.gen)
      File "/private/home/iseessel/conda-bld/vissl_1633020455653/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/utils/test_utils.py", line 29, in in_temporary_directory
        old_cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
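
    A minimal sketch of the exception-safe version, assuming the original context manager changed into a temporary directory without a try/finally (illustrative, not the exact VISSL code):

    import contextlib
    import os
    import shutil
    import tempfile

    @contextlib.contextmanager
    def in_temporary_directory():
        old_cwd = os.getcwd()
        temp_dir = tempfile.mkdtemp()
        try:
            os.chdir(temp_dir)
            yield temp_dir
        finally:
            # restore the old cwd even if the test body raised, so later tests
            # never start from a deleted directory (the FileNotFoundError above)
            os.chdir(old_cwd)
            shutil.rmtree(temp_dir, ignore_errors=True)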
    
    4. Destroy the process group after each test in test_tasks.py. After building, conda runs the unit tests for vissl, and the same process group is reused after the initial test. Since we start with GPU tests, we use the nccl backend and keep using it throughout the tests. One of the tests requires the gloo backend, since it calls all_gather on CPU tensors (see the sketch after this list). Note we don't hit this problem on circle-ci because we split out the tests. The specific error is:
    ERROR: test_run_0_config_test_cpu_test_test_cpu_regnet_moco_yaml (test_tasks.TaskTest)
    Instantiate and run all the test tasks [with config_file_path='config=test/cpu_test/test_cpu_regnet_moco.yaml']
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/parameterized/parameterized.py", line 533, in standalone_func
        return func(*(a + p.args), **p.kwargs)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/test_tmp/tests/test_tasks.py", line 50, in test_run
        hook_generator=default_hook_generator,
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/engines/train.py", line 130, in train_main
        trainer.train()
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/trainer/trainer_main.py", line 201, in train
        raise e
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/trainer/trainer_main.py", line 193, in train
        task = train_step_fn(task)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/trainer/train_steps/standard_train_step.py", line 158, in standard_train_step
        local_loss = task.loss(model_output, target)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/losses/moco_loss.py", line 152, in forward
        self._dequeue_and_enqueue(self.key)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/losses/moco_loss.py", line 89, in _dequeue_and_enqueue
        keys = concat_all_gather(key)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/vissl/utils/misc.py", line 230, in concat_all_gather
        torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
      File "/private/home/iseessel/conda-bld/vissl_1633035886061/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1863, in all_gather
        work = default_pg.allgather([tensor_list], [tensor])
    RuntimeError: Tensors must be CUDA and dense
    
    ----------------------------------------------------------------------
    Ran 1455 tests in 2664.959s
    
    FAILED (errors=1)
    Tests failed for vissl-0.1.5-py36.tar.bz2 - moving package to /private/home/iseessel/conda-bld/broken
    WARNING:conda_build.build:Tests failed for vissl-0.1.5-py36.tar.bz2 - moving package to /private/home/iseessel/conda-bld/broken
    WARNING conda_build.build:tests_failed(2955): Tests failed for vissl-0.1.5-py36.tar.bz2 - moving package to /private/home/iseessel/conda-bld/broken
    TESTS FAILED: vissl-0.1.5-py36.tar.bz2
    
    5. Use a specific commit of fairscale as per the circle-ci documentation.
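
    For reference, the backend mismatch in item 4 comes down to nccl's all_gather requiring CUDA tensors, while gloo supports CPU tensors. A self-contained, single-process sketch of the CPU path:

    import torch
    import torch.distributed as dist

    # gloo supports collective ops on CPU tensors; nccl does not
    dist.init_process_group(
        backend="gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1
    )
    tensor = torch.arange(4)
    gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, tensor)
    print(gathered)
    # destroying the group after each test prevents the nccl choice made by
    # an earlier GPU test from leaking into a later CPU test
    dist.destroy_process_group()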
    CLA Signed 
    opened by iseessel 18
  • How to get names of images from idx of indexes_all in DC_V2?

    ❓ How to get names of images from idx of indexes_all in DC_V2?

    Dear @iseessel, @QuentinDuval

    Introduction -

    Since deepclusterv2_loss.py uses only the loss_config, I cannot use what is suggested in #401, which relies on other pieces of information from the main config, such as self.data_sources.

    I understand DC_v2_loss.py does not require the rest of the config information for loss calculation.

    What I have done and understand -

    1. For evaluation of the clusters, I have dumped the assignments and all index values corresponding to the assignments (example attached: assignments_indexes).
    2. For data loading, I see the dataloader is used here, _but I don't know how to get image names from the data loader_.

    Question -

    1. Can the full configuration somehow be accessed so that I can use what is suggested in #401?

    Cheers, DC
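
    One hedged way to do this, assuming the dataset was built from a disk_filelist .npy of image paths (the order the dataloader indexes into); the file names below are illustrative:

    import numpy as np

    # hypothetical filelist the dataset was registered with
    image_paths = np.load("train_images.npy")
    # index values dumped alongside the assignments (see item 1 above)
    indexes_all = np.load("assignments_indexes.npy")
    # plain indexing recovers the image names for each assignment
    print(image_paths[indexes_all[:10]])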

    opened by DC95 16
  • Continuous evaluations init commit

    Create a script that continuously evaluates benchmarks as they become available from a pretraining.
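
    Conceptually, the script polls the pretraining checkpoint directory and submits every configured benchmark once per new checkpoint. A simplified, runnable sketch of that loop (the real scheduler submits SLURM jobs rather than printing):

    import os
    import time

    def watch_and_evaluate(checkpoint_dir, benchmarks, poll_seconds=60, max_polls=None):
        """Launch every benchmark once for each new checkpoint in checkpoint_dir."""
        evaluated = set()
        polls = 0
        while max_polls is None or polls < max_polls:
            for fname in sorted(os.listdir(checkpoint_dir)):
                if fname.endswith(".torch") and fname not in evaluated:
                    for bench in benchmarks:
                        # stand-in for submitting a SLURM evaluation job
                        print("submit", bench["evaluation_name"], "on", fname)
                    evaluated.add(fname)
            polls += 1
            time.sleep(poll_seconds)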

    Next Steps:

    1. Deal with sharded checkpoints and their conversion
    2. Improve max_iteration logic
    3. Extend to FB infra.
    4. Write unit tests
    5. Think about how to handle these tricky evaluation tests: https://github.com/facebookresearch/vissl/pull/325#issuecomment-853047525
    6. Try not to replicate so much logic in the class (e.g. get path names from vissl code, requires some refactoring).
    7. Look into email notifications.

    Testing:

    1. Run 8-node SwAV for 10 epochs with 3 different benchmark evaluations with different resource requirements. SUCCESS.

    json config:

    {
        "params": {
               "training_checkpoint_dir": "/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints",
               "benchmarks": [
                   {
                       "evaluation_name": "clevr_count_linear",
                       "config_files": [
                           "config=config_local/eval_resnet_8gpu_transfer_clevr_count_linear_benchmark_suite_scheduler_test.yaml"
                       ]
                   },
                   {
                       "evaluation_name": "clevr_dist_linear",
                       "config_files": [
                           "config=config_local/eval_resnet_8gpu_transfer_clevr_dist_linear_benchmark_suite_scheduler_test.yaml"
                       ]
                   },
                   {
                       "evaluation_name": "in1k_linear",
                       "config_files": [
                           "config=config_local/eval_resnet_8gpu_transfer_in1k_linear_benchmark_suite_scheduler_test.yaml"
                       ]
                   }
               ],
               "evaluation_iter_freq": 600,
               "evaluation_phase_freq": 2,
               "evaluate_final_phase": true,
               "autoload_slurm_evaluator_checkpoint": false,
               "slurm_evaluator_checkpoint": null,
               "auto_retry_evaluations": true,
               "retry_evaluation_job_ids": [],
               "max_retries": 3,
               "pytorch_ports": [40050, 40051, 40052, 40053, 40054, 40055, 40056, 40057, 40058, 40059, 40060, 40061, 40062, 40063]
           },
           "slurm_options": {
               "PARTITION": "learnfair"
           }
    }
    

    Example snippet from evaluation_metrics.json:

    {
        "model_final_checkpoint_phase9": [
            {
                "checkpoint_dir": "/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear/checkpoints",
                "config_files": [
                    "config=config_local/eval_resnet_8gpu_transfer_clevr_count_linear_benchmark_suite_scheduler_test.yaml",
                    "hydra.run.dir='/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear'",
                    "config.CHECKPOINT.DIR='/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear/checkpoints'",
                    "config.SLURM.LOG_FOLDER='/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear'",
                    "config.SLURM.LOG_FOLDER='/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear'",
                    "config.SLURM.USE_SLURM=true",
                    "config.MODEL.WEIGHTS_INIT.PARAMS_FILE='/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/model_final_checkpoint_phase9.torch'"
                ],
                "evaluation_name": "clevr_count_linear",
                "job_id": "42410489",
                "metrics": {
                    "test_accuracy_list_meter_top_1_res5": {
                        "iteration": 822,
                        "metric": 34.62,
                        "train_phase_idx": 2
                    },
                    "train_accuracy_list_meter_top_1_res5": {
                        "iteration": 822,
                        "metric": 33.8514,
                        "train_phase_idx": 2
                    }
                },
                "num_retries": 1,
                "slurm_checkpoint_dir": "/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear/checkpoints",
                "slurm_log_dir": "/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/evaluations/model_final_checkpoint_phase9/clevr_count_linear",
                "slurm_state": "COMPLETED",
                "weights_init_params_file": "/checkpoint/iseessel/vissl/2021-06-09-11-19-12/checkpoints/model_final_checkpoint_phase9.torch"
            }, ...
    

    The following hold:

    1. Training completes appropriately, w/o errors.
    2. Able to resume checkpoints.
    3. Evaluation folder structure is as expected above.
    4. Best Metrics are extracted.
    CLA Signed 
    opened by iseessel 16
  • How to load a pretrained/finetuned VISSL model for inference?

    ❓ How to load a pretrained/finetuned VISSL model for inference?

    Preface: I am aware of #235 and of https://github.com/facebookresearch/vissl/blob/master/tutorials/Using_a_pretrained_model_for_inference.ipynb

    First of all, thanks a lot for making VISSL available - it's an awesome tool.

    However, I am struggling with using the models I've trained for simple inference. Specifically, I am trying to score images with a VISSL model that I've finetuned on my own simple dataset with 4 classes. During training, the model achieved a high TOP-1 accuracy on the validation set. Consequently, I'd assume that when using the model for inference and scoring images from the validation set, I should see the same accuracy. Strangely enough, the model predictions are rubbish, with the model basically always predicting one class. My guess is that I am doing something wrong when loading and preparing the model for inference. I'll provide the technical details below:

    Training

    I've finetuned a torchvision ResNet50 model, following the official tutorial https://vissl.ai/tutorials/Benchmark_Full_Finetuning_on_ImageNet_1K. Specifically, I've executed the following run command:

    python run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_in1k_semi_sup_fulltune_mod \
        config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \
        config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] \
        config.DATA.TRAIN.DATASET_NAMES=[mydata] \
        config.DATA.TRAIN.COPY_TO_LOCAL_DISK=False \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=32 \
        config.DATA.TEST.DATA_SOURCES=[disk_folder] \
        config.DATA.TEST.LABEL_SOURCES=[disk_folder] \
        config.DATA.TEST.DATASET_NAMES=[mydata] \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=32 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="./checkpoints_finetune" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="resnet50-19c8e357.pth" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk._feature_blocks." \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME=""
    

    using a slightly modified yaml config compared to the base eval_resnet_8gpu_transfer_in1k_semi_sup_fulltune. The modifications only concern the HEAD:

      MODEL:
        TRUNK:
          NAME: resnet
          TRUNK_PARAMS:
            RESNETS:
              DEPTH: 50
        HEAD:
          PARAMS: [
            ["mlp", {"dims": [2048, 4]}],
          ]
    

    The TOP-1 accuracy during training reaches 98% on the training data and about 95% on the validation data. I've double checked that the model indeed is loading the intended data, that the targets are correctly used, and that the model predictions during validation reflect on average the 95% accuracy by running the above command in pdb (python -m pdb run_distributed_engines.py ...), setting breakpoints in standard_train_step in vissl/trainer/train_steps/standard_train_step.py, and inspecting the contents of sample and model_output. Everything looks plausible and consistent.

    Inference

    I have tried to load the model in "inference" mode following the suggestions in #235 and the tutorial https://github.com/facebookresearch/vissl/blob/master/tutorials/Using_a_pretrained_model_for_inference.ipynb (note the transformation pipeline, which should reproduce one-to-one the transformations used in the testing phase during training; see eval_resnet_8gpu_transfer_in1k_semi_sup_fulltune)

    from omegaconf import OmegaConf
    from vissl.utils.hydra_config import AttrDict
    from vissl.models import build_model
    from classy_vision.generic.util import load_checkpoint
    from vissl.utils.checkpoint import init_model_from_weights
    from PIL import Image
    import torchvision.transforms as transforms
    
    config = OmegaConf.load("configs/config/eval_resnet_8gpu_transfer_in1k_semi_sup_fulltune_mod.yaml")
    default_config = OmegaConf.load("vissl/config/defaults.yaml")
    
    cfg = OmegaConf.merge(default_config, config)
    
    cfg = AttrDict(cfg)
    cfg.config.MODEL.WEIGHTS_INIT.PARAMS_FILE = "checkpoints_finetune/model_final_checkpoint_phase138.torch"
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY = True
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.SHOULD_FLATTEN_FEATS = False
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.LINEAR_EVAL_FEAT_POOL_OPS_MAP = [["res5avg", ["Identity", []]]]
    
    model = build_model(cfg.config.MODEL, cfg.config.OPTIMIZER)
    weights = load_checkpoint(checkpoint_path=cfg.config.MODEL.WEIGHTS_INIT.PARAMS_FILE)
    
    init_model_from_weights(
        config=cfg.config,
        model=model,
        state_dict=weights,
        state_dict_key_name="classy_state_dict",
        skip_layers=[],  # Use this if you do not want to load all layers
    )
    pipeline = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    

    I've then tried to use this model to score images from the validation set, expecting the correct target class to be predicted in 95% of cases:

    import os

    for i in range(4):
        print("Validation set target class ", i)
        img_dir = "mydata/val/{}".format(i)
        for img_name in sorted(os.listdir(img_dir))[:5]:
            img_fname = os.path.join(img_dir, img_name)
            image = Image.open(img_fname).convert("RGB")
            x = pipeline(image)
            features = model(x.unsqueeze(0))
            _, pred = features[0].float().topk(1, largest=True, sorted=True)
            print(img_fname, features, pred[0])
    

    But the predictions are all over the place:

    Validation set target class  0
    mydata/val/0/26b47d05f7a17b09fdca68f01ef42740.jpg [tensor([-8.2779, -0.6585,  4.4609,  5.0466], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/0/43eb8a990f72e5dd084dd926b233a5dc.jpg [tensor([-8.3000, -0.5172,  4.3451,  5.1021], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/0/49b1838adc0b4d16b5f5a282c4a13333.jpg [tensor([-8.1877, -0.7219,  5.0734,  4.4780], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/0/8af96ab34dee1163c1c23910b5e3c37e.jpg [tensor([-8.0564, -0.7757,  4.5512,  4.9206], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/0/de2bae914295d209ca4b5b1772e8b89e.jpg [tensor([-8.3365, -0.6826,  4.4628,  5.1643], grad_fn=<AddBackward0>)] tensor(3)
    Validation set target class  1
    mydata/val/1/006d0afb20f6f92a742978f1a65e8ecc.jpg [tensor([-8.5201, -0.2442,  4.7602,  4.6363], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/1/0278d05c83a725304fa506d26f15f332.jpg [tensor([-8.6195, -0.2458,  4.6237,  4.9525], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/1/076c219c1f2ec1859ff3c3cd6a4fce0f.jpg [tensor([-8.2978, -0.4898,  4.9301,  4.5161], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/1/0af6840fd3ae8ec0f43f70fd0f9b80d2.jpg [tensor([-8.4313, -0.8287,  4.7313,  5.1053], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/1/0bc4bddf1f3689def5df97d557a2de3a.jpg [tensor([-8.4514, -0.3272,  4.6578,  4.7634], grad_fn=<AddBackward0>)] tensor(3)
    Validation set target class  2
    mydata/val/2/0019ba30aa56fc050113076326ee3ec3.jpg [tensor([-8.2777, -0.5599,  4.8195,  4.5890], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/2/00221133dfde2e3196690a0e4f6e6114.jpg [tensor([-8.3178, -0.4367,  4.4543,  4.9198], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/2/0023c8338336625f71209b7a80a6b093.jpg [tensor([-8.3114, -0.6324,  4.8827,  4.7317], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/2/0042f8e6f0c7ec5d2d4ae8f467ba3365.jpg [tensor([-8.4777, -0.7538,  5.0929,  4.8214], grad_fn=<AddBackward0>)] tensor(2)
    mydata/val/2/0051db26771cd7c3f91019751a2006ff.jpg [tensor([-8.2487, -0.8269,  4.4676,  5.2313], grad_fn=<AddBackward0>)] tensor(3)
    Validation set target class  3
    mydata/val/3/001fcdf186182ee139e9c7aa710e5b50.jpg [tensor([-8.4400, -0.7982,  4.2155,  5.6232], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/3/00afccfd48cb0155ee0a9f74553601ca.jpg [tensor([-8.3494, -0.7743,  4.4209,  5.3379], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/3/01a68c73059c25c045c5101a72f314ab.jpg [tensor([-8.1762, -0.6270,  4.3527,  5.0999], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/3/02e9cd3870cae126e00573bbbb24874a.jpg [tensor([-8.5710, -0.5589,  4.0382,  5.7221], grad_fn=<AddBackward0>)] tensor(3)
    mydata/val/3/03946f596354dd4a01b5f0ee47ae2a8a.jpg [tensor([-8.3258, -0.5837,  4.4099,  5.0943], grad_fn=<AddBackward0>)] tensor(3)
    

    Based on the comments in defaults.yaml (in the FEATURE_EVAL_SETTINGS section), I've tried different config setups, such as

    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON = True
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY = False
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD = True
    cfg.config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD = True
    

    but the results basically remained the same.

    I very much suspect that I am messing something up somewhere during loading and preparing the trained model for inference. Could you please point me in the right direction? I can provide more technical details if required.

    awaiting-user-response 
    opened by unoebauer 15
  • Add inaturalist2018 script for inaturalist2018 disk_filelist creation

    Following example: https://github.com/facebookresearch/vissl/pull/265

    1. Adapt inaturalist2018 script for open source disk_filelist creation.
    2. Update the README.
    3. Update the linear benchmark with a comment about data preparation.

    Testing Steps:

    1. CircleCI.
    2. Run inaturalist2018 benchmark with data prepared from script.
    3. Docs proofreading
    CLA Signed Merged 
    opened by iseessel 15
  • Model not training in Colab

    Problem: the command exits after a few seconds without training, and no checkpoints are output.

    Command:

    !python3 run_distributed_engines.py \
    hydra.verbose=true \
    config=supervised_1gpu_resnet_example \
    config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \
    config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] \
    config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder] \
    config.DATA.TRAIN.DATA_PATHS=[/content/dummy_data/train] \
    config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2 \
    config.DATA.TEST.DATA_SOURCES=[disk_folder] \
    config.DATA.TEST.LABEL_SOURCES=[disk_folder] \
    config.DATA.TEST.DATASET_NAMES=[dummy_data_folder] \
    config.DATA.TEST.DATA_PATHS=[/content/dummy_data/val] \
    config.DATA.TEST.BATCHSIZE_PER_REPLICA=2 \
    config.DISTRIBUTED.NUM_NODES=1 \
    config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
    config.OPTIMIZER.num_epochs=2 \
    config.OPTIMIZER.param_schedulers.lr.values=[0.01,0.001] \
    config.OPTIMIZER.param_schedulers.lr.milestones=[1] \
    config.TENSORBOARD_SETUP.USE_TENSORBOARD=true \
    config.CHECKPOINT.DIR="./checkpoints"

    Output:

    overrides: ['hydra.verbose=true', 'config=supervised_1gpu_resnet_example', 'config.DATA.TRAIN.DATA_SOURCES=[disk_folder]', 'config.DATA.TRAIN.LABEL_SOURCES=[disk_folder]', 'config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder]', 'config.DATA.TRAIN.DATA_PATHS=[/content/dummy_data/train]', 'config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2', 'config.DATA.TEST.DATA_SOURCES=[disk_folder]', 'config.DATA.TEST.LABEL_SOURCES=[disk_folder]', 'config.DATA.TEST.DATASET_NAMES=[dummy_data_folder]', 'config.DATA.TEST.DATA_PATHS=[/content/dummy_data/val]', 'config.DATA.TEST.BATCHSIZE_PER_REPLICA=2', 'config.DISTRIBUTED.NUM_NODES=1', 'config.DISTRIBUTED.NUM_PROC_PER_NODE=1', 'config.OPTIMIZER.num_epochs=2', 'config.OPTIMIZER.param_schedulers.lr.values=[0.01,0.001]', 'config.OPTIMIZER.param_schedulers.lr.milestones=[1]', 'config.TENSORBOARD_SETUP.USE_TENSORBOARD=true', 'config.CHECKPOINT.DIR=./checkpoints', 'hydra.verbose=true']

    INFO 2021-03-28 03:48:24,957 init.py: 32: Provided Config has latest version: 1 INFO 2021-03-28 03:48:24,958 run_distributed_engines.py: 163: Spawning process for node_id: 0, local_rank: 0, dist_rank: 0, dist_run_id: localhost:42573 INFO 2021-03-28 03:48:24,958 train.py: 66: Env set for rank: 0, dist_rank: 0 INFO 2021-03-28 03:48:24,958 env.py: 41: CLICOLOR: 1 INFO 2021-03-28 03:48:24,958 env.py: 41: CLOUDSDK_CONFIG: /content/.config INFO 2021-03-28 03:48:24,959 env.py: 41: CLOUDSDK_PYTHON: python3 INFO 2021-03-28 03:48:24,959 env.py: 41: COLAB_GPU: 1 INFO 2021-03-28 03:48:24,959 env.py: 41: CUDA_VERSION: 11.0.3 INFO 2021-03-28 03:48:24,959 env.py: 41: CUDNN_VERSION: 8.0.4.30 INFO 2021-03-28 03:48:24,959 env.py: 41: DATALAB_SETTINGS_OVERRIDES: {"kernelManagerProxyPort":6000,"kernelManagerProxyHost":"172.28.0.3","jupyterArgs":["--ip="172.28.0.2""],"debugAdapterMultiplexerPath":"/usr/local/bin/dap_multiplexer"} INFO 2021-03-28 03:48:24,959 env.py: 41: DEBIAN_FRONTEND: noninteractive INFO 2021-03-28 03:48:24,959 env.py: 41: ENV: /root/.bashrc INFO 2021-03-28 03:48:24,959 env.py: 41: GCE_METADATA_TIMEOUT: 0 INFO 2021-03-28 03:48:24,959 env.py: 41: GCS_READ_CACHE_BLOCK_SIZE_MB: 16 INFO 2021-03-28 03:48:24,959 env.py: 41: GIT_PAGER: cat INFO 2021-03-28 03:48:24,959 env.py: 41: GLIBCPP_FORCE_NEW: 1 INFO 2021-03-28 03:48:24,959 env.py: 41: GLIBCXX_FORCE_NEW: 1 INFO 2021-03-28 03:48:24,959 env.py: 41: HOME: /root INFO 2021-03-28 03:48:24,959 env.py: 41: HOSTNAME: 392565ebe3a4 INFO 2021-03-28 03:48:24,960 env.py: 41: JPY_PARENT_PID: 58 INFO 2021-03-28 03:48:24,960 env.py: 41: LANG: en_US.UTF-8 INFO 2021-03-28 03:48:24,960 env.py: 41: LAST_FORCED_REBUILD: 20210316 INFO 2021-03-28 03:48:24,960 env.py: 41: LD_LIBRARY_PATH: /usr/lib64-nvidia INFO 2021-03-28 03:48:24,960 env.py: 41: LD_PRELOAD: /usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 INFO 2021-03-28 03:48:24,960 env.py: 41: LIBRARY_PATH: /usr/local/cuda/lib64/stubs INFO 2021-03-28 03:48:24,960 env.py: 41: LOCAL_RANK: 0 INFO 2021-03-28 03:48:24,960 env.py: 41: MPLBACKEND: module://ipykernel.pylab.backend_inline INFO 2021-03-28 03:48:24,960 env.py: 41: NCCL_VERSION: 2.7.8 INFO 2021-03-28 03:48:24,960 env.py: 41: NO_GCE_CHECK: True INFO 2021-03-28 03:48:24,960 env.py: 41: NVIDIA_DRIVER_CAPABILITIES: compute,utility INFO 2021-03-28 03:48:24,960 env.py: 41: NVIDIA_REQUIRE_CUDA: cuda>=11.0 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 INFO 2021-03-28 03:48:24,960 env.py: 41: NVIDIA_VISIBLE_DEVICES: all INFO 2021-03-28 03:48:24,960 env.py: 41: OLDPWD: / INFO 2021-03-28 03:48:24,960 env.py: 41: PAGER: cat INFO 2021-03-28 03:48:24,961 env.py: 41: PATH: /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin INFO 2021-03-28 03:48:24,961 env.py: 41: PWD: /content INFO 2021-03-28 03:48:24,961 env.py: 41: PYDEVD_USE_FRAME_EVAL: NO INFO 2021-03-28 03:48:24,961 env.py: 41: PYTHONPATH: /env/python INFO 2021-03-28 03:48:24,961 env.py: 41: PYTHONWARNINGS: ignore:::pip._internal.cli.base_command INFO 2021-03-28 03:48:24,961 env.py: 41: RANK: 0 INFO 2021-03-28 03:48:24,961 env.py: 41: SHELL: /bin/bash INFO 2021-03-28 03:48:24,961 env.py: 41: SHLVL: 1 INFO 2021-03-28 03:48:24,961 env.py: 41: TBE_CREDS_ADDR: 172.28.0.1:8008 INFO 2021-03-28 03:48:24,961 env.py: 41: TERM: xterm-color INFO 2021-03-28 03:48:24,961 env.py: 41: TF_FORCE_GPU_ALLOW_GROWTH: true INFO 2021-03-28 03:48:24,961 env.py: 41: WORLD_SIZE: 1 INFO 
2021-03-28 03:48:24,961 env.py: 41: _: /usr/bin/python3 INFO 2021-03-28 03:48:24,961 env.py: 41: __EGL_VENDOR_LIBRARY_DIRS: /usr/lib64-nvidia:/usr/share/glvnd/egl_vendor.d/ INFO 2021-03-28 03:48:24,962 misc.py: 86: Set start method of multiprocessing to fork INFO 2021-03-28 03:48:24,962 train.py: 77: Setting seed.... INFO 2021-03-28 03:48:24,962 misc.py: 99: MACHINE SEED: 0 INFO 2021-03-28 03:48:24,980 hydra_config.py: 140: Training with config: INFO 2021-03-28 03:48:24,986 hydra_config.py: 144: {'CHECKPOINT': {'APPEND_DISTR_RUN_ID': False, 'AUTO_RESUME': True, 'BACKEND': 'disk', 'CHECKPOINT_FREQUENCY': 1, 'CHECKPOINT_ITER_FREQUENCY': -1, 'DIR': './checkpoints', 'LATEST_CHECKPOINT_RESUME_FILE_NUM': 1, 'OVERWRITE_EXISTING': False, 'USE_SYMLINK_CHECKPOINT_FOR_RESUME': False}, 'CLUSTERFIT': {'CLUSTER_BACKEND': 'faiss', 'FEATURES': {'DATASET_NAME': '', 'DATA_PARTITION': 'TRAIN', 'LAYER_NAME': ''}, 'NUM_CLUSTERS': 16000, 'N_ITER': 50}, 'DATA': {'DDP_BUCKET_CAP_MB': 25, 'ENABLE_ASYNC_GPU_COPY': True, 'NUM_DATALOADER_WORKERS': 5, 'PIN_MEMORY': True, 'TEST': {'BATCHSIZE_PER_REPLICA': 2, 'COLLATE_FUNCTION': 'default_collate', 'COLLATE_FUNCTION_PARAMS': {}, 'COPY_DESTINATION_DIR': '', 'COPY_TO_LOCAL_DISK': False, 'DATASET_NAMES': ['dummy_data_folder'], 'DATA_LIMIT': -1, 'DATA_PATHS': ['/content/dummy_data/val'], 'DATA_SOURCES': ['disk_folder'], 'DEFAULT_GRAY_IMG_SIZE': 224, 'DROP_LAST': False, 'ENABLE_QUEUE_DATASET': False, 'INPUT_KEY_NAMES': ['data'], 'LABEL_PATHS': [], 'LABEL_SOURCES': ['disk_folder'], 'LABEL_TYPE': 'standard', 'MMAP_MODE': True, 'TARGET_KEY_NAMES': ['label'], 'TRANSFORMS': [{'name': 'Resize', 'size': 256}, {'name': 'CenterCrop', 'size': 224}, {'name': 'ToTensor'}, {'mean': [0.485, 0.456, 0.406], 'name': 'Normalize', 'std': [0.229, 0.224, 0.225]}], 'USE_STATEFUL_DISTRIBUTED_SAMPLER': False}, 'TRAIN': {'BATCHSIZE_PER_REPLICA': 2, 'COLLATE_FUNCTION': 'default_collate', 'COLLATE_FUNCTION_PARAMS': {}, 'COPY_DESTINATION_DIR': '', 'COPY_TO_LOCAL_DISK': False, 'DATASET_NAMES': ['dummy_data_folder'], 'DATA_LIMIT': -1, 'DATA_PATHS': ['/content/dummy_data/train'], 'DATA_SOURCES': ['disk_folder'], 'DEFAULT_GRAY_IMG_SIZE': 224, 'DROP_LAST': False, 'ENABLE_QUEUE_DATASET': False, 'INPUT_KEY_NAMES': ['data'], 'LABEL_PATHS': [], 'LABEL_SOURCES': ['disk_folder'], 'LABEL_TYPE': 'standard', 'MMAP_MODE': True, 'TARGET_KEY_NAMES': ['label'], 'TRANSFORMS': [{'name': 'RandomResizedCrop', 'size': 224}, {'name': 'RandomHorizontalFlip'}, {'brightness': 0.4, 'contrast': 0.4, 'hue': 0.4, 'name': 'ColorJitter', 'saturation': 0.4}, {'name': 'ToTensor'}, {'mean': [0.485, 0.456, 0.406], 'name': 'Normalize', 'std': [0.229, 0.224, 0.225]}], 'USE_STATEFUL_DISTRIBUTED_SAMPLER': False}}, 'DISTRIBUTED': {'BACKEND': 'nccl', 'BROADCAST_BUFFERS': True, 'INIT_METHOD': 'tcp', 'MANUAL_GRADIENT_REDUCTION': False, 'NCCL_DEBUG': False, 'NCCL_SOCKET_NTHREADS': '', 'NUM_NODES': 1, 'NUM_PROC_PER_NODE': 1, 'RUN_ID': 'auto'}, 'IMG_RETRIEVAL': {'DATASET_PATH': '', 'EVAL_BINARY_PATH': '', 'EVAL_DATASET_NAME': 'Paris', 'FEATS_PROCESSING_TYPE': '', 'GEM_POOL_POWER': 4.0, 'N_PCA': 512, 'RESIZE_IMG': 1024, 'SHOULD_TRAIN_PCA_OR_WHITENING': True, 'SPATIAL_LEVELS': 3, 'TEMP_DIR': '/tmp/instance_retrieval/', 'TRAIN_DATASET_NAME': 'Oxford', 'WHITEN_IMG_LIST': ''}, 'LOG_FREQUENCY': 100, 'LOSS': {'CrossEntropyLoss': {'ignore_index': -1}, 'bce_logits_multiple_output_single_target': {'normalize_output': False, 'reduction': 'none', 'world_size': 1}, 'cross_entropy_multiple_output_single_target': {'ignore_index': -1, 'normalize_output': False, 
'reduction': 'mean', 'temperature': 1.0, 'weight': None}, 'deepclusterv2_loss': {'BATCHSIZE_PER_REPLICA': 256, 'DROP_LAST': True, 'kmeans_iters': 10, 'memory_params': {'crops_for_mb': [0], 'embedding_dim': 128}, 'num_clusters': [3000, 3000, 3000], 'num_crops': 2, 'num_train_samples': -1, 'temperature': 0.1}, 'moco_loss': {'embedding_dim': 128, 'momentum': 0.999, 'queue_size': 65536, 'temperature': 0.2}, 'multicrop_simclr_info_nce_loss': {'buffer_params': {'effective_batch_size': 4096, 'embedding_dim': 128, 'world_size': 64}, 'num_crops': 2, 'temperature': 0.1}, 'name': 'cross_entropy_multiple_output_single_target', 'nce_loss_with_memory': {'loss_type': 'nce', 'loss_weights': [1.0], 'memory_params': {'embedding_dim': 128, 'memory_size': -1, 'momentum': 0.5, 'norm_init': True, 'update_mem_on_forward': True}, 'negative_sampling_params': {'num_negatives': 16000, 'type': 'random'}, 'norm_constant': -1, 'norm_embedding': True, 'num_train_samples': -1, 'temperature': 0.07, 'update_mem_with_emb_index': -100}, 'simclr_info_nce_loss': {'buffer_params': {'effective_batch_size': 4096, 'embedding_dim': 128, 'world_size': 64}, 'temperature': 0.1}, 'swav_loss': {'crops_for_assign': [0, 1], 'embedding_dim': 128, 'epsilon': 0.05, 'normalize_last_layer': True, 'num_crops': 2, 'num_iters': 3, 'num_prototypes': [3000], 'output_dir': '', 'queue': {'local_queue_length': 0, 'queue_length': 0, 'start_iter': 0}, 'temp_hard_assignment_iters': 0, 'temperature': 0.1, 'use_double_precision': False}, 'swav_momentum_loss': {'crops_for_assign': [0, 1], 'embedding_dim': 128, 'epsilon': 0.05, 'momentum': 0.99, 'momentum_eval_mode_iter_start': 0, 'normalize_last_layer': True, 'num_crops': 2, 'num_iters': 3, 'num_prototypes': [3000], 'queue': {'local_queue_length': 0, 'queue_length': 0, 'start_iter': 0}, 'temperature': 0.1, 'use_double_precision': False}}, 'MACHINE': {'DEVICE': 'gpu'}, 'METERS': {'accuracy_list_meter': {'meter_names': [], 'num_meters': 1, 'topk_values': [1, 5]}, 'enable_training_meter': True, 'mean_ap_list_meter': {'max_cpu_capacity': -1, 'meter_names': [], 'num_classes': 9605, 'num_meters': 1}, 'name': 'accuracy_list_meter'}, 'MODEL': {'ACTIVATION_CHECKPOINTING': {'NUM_ACTIVATION_CHECKPOINTING_SPLITS': 2, 'USE_ACTIVATION_CHECKPOINTING': False}, 'AMP_PARAMS': {'AMP_ARGS': {'opt_level': 'O1'}, 'AMP_TYPE': 'apex', 'USE_AMP': False}, 'CUDA_CACHE': {'CLEAR_CUDA_CACHE': False, 'CLEAR_FREQ': 100}, 'FEATURE_EVAL_SETTINGS': {'EVAL_MODE_ON': False, 'EVAL_TRUNK_AND_HEAD': False, 'EXTRACT_TRUNK_FEATURES_ONLY': False, 'FREEZE_TRUNK_AND_HEAD': False, 'FREEZE_TRUNK_ONLY': False, 'LINEAR_EVAL_FEAT_POOL_OPS_MAP': [], 'SHOULD_FLATTEN_FEATS': True}, 'HEAD': {'BATCHNORM_EPS': 1e-05, 'BATCHNORM_MOMENTUM': 0.1, 'PARAMS': [['mlp', {'dims': [2048, 1000]}]], 'PARAMS_MULTIPLIER': 1.0}, 'INPUT_TYPE': 'rgb', 'MODEL_COMPLEXITY': {'COMPUTE_COMPLEXITY': False, 'INPUT_SHAPE': [3, 224, 224]}, 'MULTI_INPUT_HEAD_MAPPING': [], 'NON_TRAINABLE_PARAMS': [], 'SINGLE_PASS_EVERY_CROP': False, 'SYNC_BN_CONFIG': {'CONVERT_BN_TO_SYNC_BN': False, 'GROUP_SIZE': -1, 'SYNC_BN_TYPE': 'pytorch'}, 'TEMP_FROZEN_PARAMS_ITER_MAP': [], 'TRUNK': {'NAME': 'resnet', 'TRUNK_PARAMS': {'EFFICIENT_NETS': {}, 'REGNET': {}, 'RESNETS': {'DEPTH': 50, 'GROUPS': 1, 'LAYER4_STRIDE': 2, 'NORM': 'BatchNorm', 'WIDTH_MULTIPLIER': 1, 'WIDTH_PER_GROUP': 64, 'ZERO_INIT_RESIDUAL': False}}}, 'WEIGHTS_INIT': {'APPEND_PREFIX': '', 'PARAMS_FILE': '', 'REMOVE_PREFIX': '', 'SKIP_LAYERS': ['num_batches_tracked'], 'STATE_DICT_KEY_NAME': 'classy_state_dict'}}, 'MONITOR_PERF_STATS': False, 
'MULTI_PROCESSING_METHOD': 'fork', 'NEAREST_NEIGHBOR': {'L2_NORM_FEATS': False, 'SIGMA': 0.1, 'TOPK': 200}, 'OPTIMIZER': {'head_optimizer_params': {'use_different_lr': False, 'use_different_wd': False, 'weight_decay': 0.0001}, 'larc_config': {'clip': False, 'eps': 1e-08, 'trust_coefficient': 0.001}, 'momentum': 0.9, 'name': 'sgd', 'nesterov': True, 'num_epochs': 2, 'param_schedulers': {'lr': {'auto_lr_scaling': {'auto_scale': True, 'base_lr_batch_size': 256, 'base_value': 0.1}, 'end_value': 0.0, 'interval_scaling': [], 'lengths': [], 'milestones': [1], 'name': 'multistep', 'schedulers': [], 'start_value': 0.1, 'update_interval': 'epoch', 'value': 0.1, 'values': [0.00078125, 7.813e-05]}, 'lr_head': {'auto_lr_scaling': {'auto_scale': True, 'base_lr_batch_size': 256, 'base_value': 0.1}, 'end_value': 0.0, 'interval_scaling': [], 'lengths': [], 'milestones': [1], 'name': 'multistep', 'schedulers': [], 'start_value': 0.1, 'update_interval': 'epoch', 'value': 0.1, 'values': [0.00078125, 7.813e-05]}}, 'regularize_bias': True, 'regularize_bn': False, 'use_larc': False, 'weight_decay': 0.0001}, 'PERF_STAT_FREQUENCY': -1, 'ROLLING_BTIME_FREQ': -1, 'SEED_VALUE': 0, 'SVM': {'cls_list': [], 'costs': {'base': -1.0, 'costs_list': [0.1, 0.01], 'power_range': [4, 20]}, 'cross_val_folds': 3, 'dual': True, 'force_retrain': False, 'loss': 'squared_hinge', 'low_shot': {'dataset_name': 'voc', 'k_values': [1, 2, 4, 8, 16, 32, 64, 96], 'sample_inds': [1, 2, 3, 4, 5]}, 'max_iter': 2000, 'normalize': True, 'penalty': 'l2'}, 'TENSORBOARD_SETUP': {'EXPERIMENT_LOG_DIR': 'tensorboard', 'FLUSH_EVERY_N_MIN': 5, 'LOG_DIR': '.', 'LOG_PARAMS': True, 'LOG_PARAMS_EVERY_N_ITERS': 310, 'LOG_PARAMS_GRADIENTS': True, 'USE_TENSORBOARD': True}, 'TEST_EVERY_NUM_EPOCH': 1, 'TEST_MODEL': True, 'TEST_ONLY': False, 'TRAINER': {'TASK_NAME': 'self_supervision_task', 'TRAIN_STEP_NAME': 'standard_train_step'}, 'VERBOSE': True} INFO 2021-03-28 03:48:25,689 train.py: 89: System config:


    sys.platform         linux
    Python               3.7.10 (default, Feb 20 2021, 21:17:23) [GCC 7.5.0]
    numpy                1.19.5
    Pillow               7.0.0
    vissl                0.1.5 @/usr/local/lib/python3.7/dist-packages/vissl
    GPU available        True
    GPU 0                Tesla P100-PCIE-16GB
    CUDA_HOME            /usr/local/cuda
    torchvision          0.6.1+cu101 @/usr/local/lib/python3.7/dist-packages/torchvision
    hydra                1.0.6 @/usr/local/lib/python3.7/dist-packages/hydra
    classy_vision        0.6.0.dev @/usr/local/lib/python3.7/dist-packages/classy_vision
    tensorboard          1.15.0
    apex                 0.1 @/usr/local/lib/python3.7/dist-packages/apex
    cv2                  4.1.2
    PyTorch              1.8.1+cu102 @/usr/local/lib/python3.7/dist-packages/torch
    PyTorch debug build  False


    PyTorch built with:

    • GCC 7.3
    • C++ Version: 201402
    • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • CUDA Runtime 10.2
    • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
    • CuDNN 7.6.5
    • Magma 2.5.2
    • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    CPU info:


    Architecture         x86_64
    CPU op-mode(s)       32-bit, 64-bit
    Byte Order           Little Endian
    CPU(s)               4
    On-line CPU(s) list  0-3
    Thread(s) per core   2
    Core(s) per socket   2
    Socket(s)            1
    NUMA node(s)         1
    Vendor ID            GenuineIntel
    CPU family           6
    Model                79
    Model name           Intel(R) Xeon(R) CPU @ 2.20GHz
    Stepping             0
    CPU MHz              2199.998
    BogoMIPS             4399.99
    Hypervisor vendor    KVM
    Virtualization type  full
    L1d cache            32K
    L1i cache            32K
    L2 cache             256K
    L3 cache             56320K
    NUMA node0 CPU(s)    0-3


    INFO 2021-03-28 03:48:25,689 tensorboard.py: 46: Tensorboard dir: ./checkpoints/tb_logs

    awaiting-user-response 
    opened by Tylersuard 14
  • Error Importing 'TrainingMode' from 'torch.onnx'

    Instructions To Reproduce the 🐛 Bug:

    I am new to VISSL and trying to learn using the tutorial notebooks. When running "Understanding VISSL Training and YAML Config.ipynb" (the second tutorial notebook) in Google Colab, the following line causes an error.

    1. Exact command run:
    from vissl.data.dataset_catalog import VisslDatasetCatalog
    
    2. This is the observed output (including full logs):
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-8-b5479c1fc208> in <module>()
    ----> 1 from vissl.data.dataset_catalog import VisslDatasetCatalog
          2 
          3 # list all the datasets that exist in catalog
          4 print(VisslDatasetCatalog.list())
          5 
    
    11 frames
    /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in <module>()
         17 from torch._six import string_classes
         18 from torch.jit import _unique_state_dict
    ---> 19 from torch.onnx import ONNX_ARCHIVE_MODEL_PROTO_NAME, ExportTypes, OperatorExportTypes, TrainingMode
         20 from torch._C import ListType, OptionalType, _propagate_and_assign_input_shapes, _check_onnx_proto
         21 from typing import Union, Tuple, List
    
    ImportError: cannot import name 'TrainingMode' from 'torch.onnx' (/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py)
    

    I have not changed any code in any way; I am just running the provided code cells and getting this error. This is all on Google Colab.

    Environment:

    1. The environment information is as follows:
    -------------------  ---------------------------------------------------------------
    sys.platform         linux
    Python               3.7.11 (default, Jul  3 2021, 18:01:19) [GCC 7.5.0]
    numpy                1.19.5
    Pillow               7.1.2
    vissl                0.1.5 @/usr/local/lib/python3.7/dist-packages/vissl
    GPU available        True
    GPU 0                Tesla P100-PCIE-16GB
    CUDA_HOME            /usr/local/cuda
    torchvision          0.6.1+cu101 @/usr/local/lib/python3.7/dist-packages/torchvision
    hydra                1.1.0 @/usr/local/lib/python3.7/dist-packages/hydra
    classy_vision        0.6.0.dev @/usr/local/lib/python3.7/dist-packages/classy_vision
    tensorboard          1.15.0
    apex                 unknown
    cv2                  4.1.2
    PyTorch              1.9.0+cu102 @/usr/local/lib/python3.7/dist-packages/torch
    PyTorch debug build  False
    -------------------  ---------------------------------------------------------------
    PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 10.2
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
      - CuDNN 7.6.5
      - Magma 2.5.2
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 
    
    CPU info:
    -------------------  ------------------------------
    Architecture         x86_64
    CPU op-mode(s)       32-bit, 64-bit
    Byte Order           Little Endian
    CPU(s)               2
    On-line CPU(s) list  0,1
    Thread(s) per core   2
    Core(s) per socket   1
    Socket(s)            1
    NUMA node(s)         1
    Vendor ID            GenuineIntel
    CPU family           6
    Model                63
    Model name           Intel(R) Xeon(R) CPU @ 2.30GHz
    Stepping             0
    CPU MHz              2299.998
    BogoMIPS             4599.99
    Hypervisor vendor    KVM
    Virtualization type  full
    L1d cache            32K
    L1i cache            32K
    L2 cache             256K
    L3 cache             46080K
    NUMA node0 CPU(s)    0,1
    -------------------  ------------------------------
    
    tutorial-outdated 
    opened by rcterrile 13
  • No module named 'classy_vision'

    Tried to run an example code described here inside a running docker container built from ./docker/* and ended up with a ModuleNotFoundError.

    Instructions To Reproduce the 🐛 Bug:

    cd vissl/docker
    image=cu101 ./build_docker.sh
    docker run -it --shm-size=8gb --env="DISPLAY" vissl:1.0-cu101 \
        python tools/run_distributed_engines.py config=pretrain/swav/swav_8node_resnet \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 config.DISTRIBUTED.NUM_NODES=1

    results in

    ** fvcore version of PathManager will be deprecated soon. **
    ** Please migrate to the version in iopath repo. **
    https://github.com/facebookresearch/iopath

    Traceback (most recent call last):
      File "tools/run_distributed_engines.py", line 13, in <module>
        from vissl.utils.distributed_launcher import (
      File "/home/vissluser/vissl/vissl/utils/distributed_launcher.py", line 15, in <module>
        from vissl.data.dataset_catalog import get_data_files
      File "/home/vissluser/vissl/vissl/data/__init__.py", line 8, in <module>
        from classy_vision.dataset import DataloaderAsyncGPUWrapper
    ModuleNotFoundError: No module named 'classy_vision'

    Environment:

    Ubuntu 20.04.

    awaiting-user-response 
    opened by antal-horvath 13
  • Potential confusion over licensing comment

    Hi Vissl-folk,

    I wanted to note some potential for confusion in one of the source files. In https://github.com/facebookresearch/vissl/blob/main/tools/object_detection_benchmark.py there is a comment giving 'Full credits' to a different Facebook project file (https://github.com/facebookresearch/moco/blob/main/detection/train_net.py). After the relicense of this project to MIT, this credit is a bit confusing, as the other project remains under CC-NC.

    The other file has no authors other than Facebook, so I'm not suggesting a concern about the licensing, just the potential for confusion. It would be nice if this is something you're able to clear up in the source file.

    opened by hyandell 0
  • NPID + MoCoV2 weights are the same?

    The weights from the download URLs for NPID and MoCoV2 appear to be the same. Perhaps a copying error?

    The code below may be run to demonstrate the equivalence:

    import torch

    def get_vissl_model(weights_url):
        from torch.hub import load_state_dict_from_url
        weights = load_state_dict_from_url(weights_url, map_location=torch.device('cpu'))
        
        def replace_module_prefix(state_dict, prefix, replace_with = ''):
            return {(key.replace(prefix, replace_with, 1) if key.startswith(prefix) else key): val
                          for (key, val) in state_dict.items()}
    
        def convert_model_weights(model):
            if "classy_state_dict" in model.keys():
                model_trunk = model["classy_state_dict"]["base_model"]["model"]["trunk"]
            elif "model_state_dict" in model.keys():
                model_trunk = model["model_state_dict"]
            else:
                model_trunk = model
            return replace_module_prefix(model_trunk, "_feature_blocks.")
    
        converted_weights = convert_model_weights(weights)
        excess_weights = ['fc','projection', 'prototypes']
        converted_weights = {key:value for (key,value) in converted_weights.items()
                                 if not any([x in key for x in excess_weights])}
        
        if 'module' in next(iter(converted_weights)):
            converted_weights = {key.replace('module.',''):value for (key,value) in converted_weights.items()
                                 if 'fc' not in key}
            
        from torchvision.models import resnet50
        import torch.nn as nn
    
        class Identity(nn.Module):
            def __init__(self):
                super(Identity, self).__init__()
    
            def forward(self, x):
                return x
    
        model = resnet50()
        model.fc = Identity()
    
        model.load_state_dict(converted_weights)
        
        return model
    
    ### NPID 
    weights_url = 'https://dl.fbaipublicfiles.com/vissl/model_zoo/npid_1node_200ep_4kneg_npid_8gpu_resnet_23_07_20.9eb36512/model_final_checkpoint_phase199.torch'
    model = get_vissl_model(weights_url)
    print(next(model.parameters())[1:10, 1, 1, 1])
    
    ### MoCoV2 
    weights_url = 'https://dl.fbaipublicfiles.com/vissl/model_zoo/moco_v2_1node_lr.03_step_b32_zero_init/model_final_checkpoint_phase199.torch'
    model = get_vissl_model(weights_url)
    print(next(model.parameters())[1:10, 1, 1, 1])
    
    ### BarlowTwins to show the difference
    weights_url = 'https://dl.fbaipublicfiles.com/vissl/model_zoo/barlow_twins/barlow_twins_32gpus_4node_imagenet1k_1000ep_resnet50.torch'
    model = get_vissl_model(weights_url)
    print(next(model.parameters())[1:10, 1, 1, 1])
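
    For a stricter check than eyeballing a slice of the first convolution, the two converted state dicts can be compared tensor by tensor. A sketch reusing the get_vissl_model helper defined above:

    import torch

    npid_model = get_vissl_model('https://dl.fbaipublicfiles.com/vissl/model_zoo/npid_1node_200ep_4kneg_npid_8gpu_resnet_23_07_20.9eb36512/model_final_checkpoint_phase199.torch')
    moco_model = get_vissl_model('https://dl.fbaipublicfiles.com/vissl/model_zoo/moco_v2_1node_lr.03_step_b32_zero_init/model_final_checkpoint_phase199.torch')

    # True only if every tensor matches exactly, which would confirm that
    # the two download URLs point at the same weights.
    identical = all(torch.equal(a, b) for (a, b) in
                    zip(npid_model.state_dict().values(),
                        moco_model.state_dict().values()))
    print('checkpoints identical:', identical)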
    
    opened by ColinConwell 0
  • Is it possible to get full imagenet pretrained weights of MoCo v2 ?

    Is it possible to get full imagenet pretrained weights of MoCo v2 ?

    ❓ Is it possible to get full imagenet pretrained weights of MoCo v2 ?

    Hello, Thank you for this nice library.

    I am trying to get the entire weights of resnet50 pretrained with MoCo v2 on the ImageNet dataset. That is, I need the two encoders, the head, etc., so that I can resume training with MoCo.

    Is it possible to do this using the weights from the model zoo or somewhere else?

    I downloaded the weights and it does not seem to contain the weights of the two encoders.
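
    One way to see what a downloaded checkpoint actually contains (a generic inspection snippet; the file name is illustrative) is to load it on CPU and list its top-level keys:

    import torch

    # Point this at the checkpoint you downloaded from the model zoo.
    ckpt = torch.load("model_final_checkpoint_phase199.torch", map_location="cpu")
    print(ckpt.keys())  # reveals whether momentum encoder / head weights are present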

    opened by CharlieCheckpt 0
  • First-class support for timm models

    First-class support for timm models

    🚀 Feature

    Pytorch Image Models (aka timm) is a popular computer vision library. If VISSL supported timm models, it would be easy to combine SOTA model architectures from timm with SOTA SSL methods.

    Motivation & Examples

    timm makes it easy to use hundreds of different model architectures, all with a consistent API. If timm models were supported, it would enable VISSL users to experiment with architectures not currently implemented in torchvision. For users who already use timm, it would reduce the friction of adopting VISSL.

    One potential way to achieve this would be a reserved prefix for timm models:

    MODEL:
      TRUNK:
        NAME: TIMM-seresnext26t_32x4d
        PRETRAINED: False
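
    As a rough illustration of what such a prefix could look like in practice (hypothetical helper names, not VISSL's actual trunk registry), the config value could be dispatched to timm.create_model:

    import timm
    import torch.nn as nn

    TIMM_PREFIX = "TIMM-"

    def build_trunk(trunk_name: str) -> nn.Module:
        # Hypothetical dispatcher: anything carrying the reserved prefix is
        # delegated to timm; num_classes=0 strips the classifier head so the
        # trunk emits features, as SSL heads expect.
        if trunk_name.startswith(TIMM_PREFIX):
            return timm.create_model(trunk_name[len(TIMM_PREFIX):],
                                     pretrained=False, num_classes=0)
        raise ValueError(f"Unknown trunk: {trunk_name}")

    trunk = build_trunk("TIMM-seresnext26t_32x4d")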
    
    opened by crypdick 0
  • EMA does not work on fp16 and does not copy weights?

    EMA does not work on fp16 and does not copy weights?

    Hi,

    Looking at the implementation of ModelEmaV2, it seems that, compared to timm, the model only works on fp32 parameters (see this line). Does this mean that it will not work if I use AMP?

    Furthermore, another difference from timm is that the ema_model is not copied (in timm, copying is done here). I am probably missing where the model is copied; can you point me to it, please? (If the model is not copied, then EMA simply corresponds to momentum.)
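
    For reference, here is a minimal sketch of the copy-then-blend pattern the question refers to, adapted from timm's ModelEmaV2 approach (simplified and hypothetical, not VISSL's code):

    from copy import deepcopy

    import torch
    import torch.nn as nn

    class SimpleEma(nn.Module):
        def __init__(self, model, decay=0.9999):
            super().__init__()
            # The crucial step: keep an independent deep copy. Without it,
            # updating the "EMA" weights in place is just momentum.
            self.module = deepcopy(model)
            self.module.eval()
            self.decay = decay

        @torch.no_grad()
        def update(self, model):
            # Blend every tensor in the copy toward the live model.
            for ema_v, model_v in zip(self.module.state_dict().values(),
                                      model.state_dict().values()):
                ema_v.copy_(self.decay * ema_v + (1.0 - self.decay) * model_v)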

    opened by YannDubs 0
Releases(v0.1.6)