An open source implementation of CLIP.

Overview

OpenCLIP

Welcome to an open source implementation of OpenAI's CLIP (Contrastive Language-Image Pre-training).

The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. Specifically, a ResNet-50 model trained with our codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet. OpenAI's CLIP model reaches 31.3% when trained on the same subset of YFCC. For ease of experimentation, we also provide code for training on the 3 million images in the Conceptual Captions dataset, where a ResNet-50x4 trained with our codebase reaches 22.2% top-1 ImageNet accuracy.

As we describe in more detail below, CLIP models in a medium accuracy regime already allow us to draw conclusions about the robustness of larger CLIP models since the models follow reliable scaling laws.

This codebase is a work in progress, and we invite everyone to contribute to making it more accessible and useful. In the future, we plan to add support for TPU training and to release larger models. We hope this codebase facilitates and promotes further research in contrastive image-text learning.

Note that src/clip is a copy of OpenAI's official repository with minimal changes.

Data

Conceptual Captions

OpenCLIP reads a CSV file with two columns: a path to an image, and a text caption. The names of the columns are passed as an argument to main.py.
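
For reference, here is a minimal sketch of writing such a CSV in Python (the paths are hypothetical, and the column names filepath and title simply match the keys passed in the sample training command below):

import pandas as pd

# Hypothetical image paths and captions; replace with your own data.
# A tab separator is assumed here -- adjust it if your copy of main.py expects a different one.
df = pd.DataFrame({
    "filepath": ["/data/images/0001.jpg", "/data/images/0002.jpg"],
    "title": ["a photo of a dog", "two people riding bikes"],
})
df.to_csv("train_data.csv", sep="\t", index=False)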

The script src/data/gather_cc.py will collect the Conceptual Captions images. First, download the Conceptual Captions URLs and then run the script from our repository:

python3 src/data/gather_cc.py path/to/Train_GCC-training.tsv path/to/Validation_GCC-1.1.0-Validation.tsv

Our training set contains 2.89M images, and our validation set contains 13K images.

YFCC and other datasets

In addition to specifying the training data via CSV files as mentioned above, our codebase also supports webdataset, which is recommended for larger scale datasets. The expected format is a series of .tar files. Each of these .tar files should contain two files for each training example, one for the image and one for the corresponding text. Both files should have the same name but different extensions. For instance, shard_001.tar could contain files such as abc.jpg and abc.txt. You can learn more about webdataset at https://github.com/webdataset/webdataset. We use .tar files with 1,000 data points each, which we create using tarp.
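
As an illustration, here is a minimal sketch of writing such shards with webdataset's ShardWriter (the sample list is hypothetical; tarp or any other tar-writing tool works just as well):

import webdataset as wds

# Hypothetical (image path, caption) pairs; replace with your own data.
samples = [("/data/images/0001.jpg", "a photo of a dog"),
           ("/data/images/0002.jpg", "two people riding bikes")]

# Write shards of 1,000 samples each; the key becomes the shared basename inside each .tar.
with wds.ShardWriter("shards/shard_%03d.tar", maxcount=1000) as sink:
    for idx, (image_path, caption) in enumerate(samples):
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        sink.write({
            "__key__": f"{idx:08d}",  # e.g. 00000000.jpg and 00000000.txt in the shard
            "jpg": image_bytes,
            "txt": caption,
        })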

You can download the YFCC dataset from Multimedia Commons. Similar to OpenAI, we used a subset of YFCC to reach the aforementioned accuracy numbers. The indices of images in this subset are in OpenAI's CLIP repository.

Training CLIP

Install dependencies

conda env create -f environment.yml
source activate open_clip

Add the directory to your PYTHONPATH:

cd open_clip
export PYTHONPATH="$PYTHONPATH:$PWD/src"

Sample running code:

nohup python -u src/training/main.py \
    --save-frequency 1 \
    --zeroshot-frequency 1 \
    --report-to tensorboard \
    --train-data="/path/to/train_data.csv"  \
    --val-data="/path/to/validation_data.csv"  \
    --csv-img-key filepath \
    --csv-caption-key title \
    --imagenet-val=/path/to/imagenet/root/val/ \
    --warmup 10000 \
    --batch-size=128 \
    --lr=1e-3 \
    --wd=0.1 \
    --epochs=30 \
    --workers=8 \
    --model RN50

Note: imagenet-val is the path to the validation set of ImageNet for zero-shot evaluation, not the training set! You can remove this argument if you do not want to perform zero-shot evaluation on ImageNet throughout training. Note that the val folder should contain subfolders. If it does not, please use this script.
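
If you are unsure whether your val folder has the expected class-subfolder layout, a quick check with torchvision (an assumption of this sketch, not a requirement of the training script itself) is:

from torchvision.datasets import ImageFolder

# ImageFolder only loads if val/ contains one subfolder per class.
val = ImageFolder("/path/to/imagenet/root/val/")
print(len(val.classes), "classes,", len(val), "images")  # expect 1000 classes and 50000 images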

When run on a machine with 8 GPUs the command should produce the following training curve for Conceptual Captions:

CLIP zero shot training curve

More detailed curves for Conceptual Captions are given at /docs/clip_conceptual_captions.md.

When training RN50 on YFCC, the same hyperparameters as above are used, except for lr=5e-4 and epochs=32.

To use a different model, such as ViT-B/32, RN50x4, RN50x16, or ViT-B/16, specify it with the --model flag, e.g. --model RN50x4.
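
To see which model configs your install knows about, the list_models helper (also referenced in the issue threads below) can be handy; this is a sketch, and the exact output depends on your version:

import open_clip

# Config names use dashes, e.g. ViT-B-32 rather than ViT-B/32.
print(open_clip.list_models())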

Launch tensorboard:

tensorboard --logdir=logs/tensorboard/ --port=7777

Sample resuming from a checkpoint:

python src/training/main.py \
    --train-data="/path/to/train_data.csv" \
    --val-data="/path/to/validation_data.csv"  \
    --resume /path/to/checkpoints/epoch_K.pt

Sample evaluation only:

python src/training/main.py \
    --val-data="/path/to/validation_data.csv"  \
    --resume /path/to/checkpoints/epoch_K.pt

Trained models

You can find our ResNet-50 trained on YFCC-15M here.
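
A hedged sketch of loading a released checkpoint for zero-shot inference (the factory function, the yfcc15m tag, and the image file are assumptions; substitute whatever checkpoint name or path you actually downloaded):

import torch
import open_clip
from PIL import Image

# Load the model and its preprocessing transform (pretrained tag assumed to be 'yfcc15m').
model, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="yfcc15m")
model.eval()

image = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # hypothetical local image
text = open_clip.tokenize(["a photo of a dog", "a photo of a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (model.logit_scale.exp() * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # the higher probability should land on the matching caption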

Scaling trends

The plot below shows how zero-shot performance of CLIP models varies as we scale the number of samples used for training. Zero-shot performance increases steadily for both ImageNet and ImageNetV2, and is far from saturated at ~15M samples.

Why are low-accuracy CLIP models interesting?

TL;DR: CLIP models have high effective robustness, even at small scales.

CLIP models are particularly intriguing because they are more robust to natural distribution shifts (see Section 3.3 in the CLIP paper). This phenomenon is illustrated by the figure below, with ImageNet accuracy on the x-axis and ImageNetV2 (a reproduction of the ImageNet validation set with distribution shift) accuracy on the y-axis. Standard training denotes training on the ImageNet train set, and the CLIP zero-shot models are shown as stars.

CLIP scatter plot

As observed by Taori et al., 2020 and Miller et al., 2021, the in-distribution and out-of-distribution accuracies of models trained on ImageNet follow a predictable linear trend (the red line in the above plot). Effective robustness quantifies robustness as accuracy beyond this baseline, i.e., how far a model lies above the red line. Ideally a model would not suffer from distribution shift and fall on the y = x line (trained human labelers are within a percentage point of the y = x line).
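
In code, effective robustness is simply the vertical gap to that fitted line. A toy sketch with made-up accuracies (Taori et al. fit the trend in a transformed space such as logit-transformed accuracy; a plain linear fit is used here for simplicity):

import numpy as np

# Made-up (ImageNet, ImageNetV2) top-1 accuracies for standard ImageNet-trained models.
std_imagenet = np.array([0.55, 0.65, 0.70, 0.76, 0.80])
std_imagenetv2 = np.array([0.43, 0.52, 0.57, 0.64, 0.68])

slope, intercept = np.polyfit(std_imagenet, std_imagenetv2, deg=1)  # the "red line"

clip_imagenet, clip_imagenetv2 = 0.32, 0.27  # a hypothetical small CLIP model
effective_robustness = clip_imagenetv2 - (slope * clip_imagenet + intercept)
print(f"effective robustness: {effective_robustness:+.3f}")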

Even though the CLIP models trained with this codebase achieve much lower accuracy than those trained by OpenAI, our models still lie on the same trend of improved effective robustness (the purple line). Therefore, we can study what makes CLIP robust without requiring industrial-scale compute.

For more information on effective robustness, please see Taori et al., 2020 and Miller et al., 2021.

The Team

We are a group of researchers at UW, Google, Stanford, Amazon, Columbia, and Berkeley.

Gabriel Ilharco*, Mitchell Wortsman*, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt

Special thanks to Jong Wook Kim and Alec Radford for help with reproducing CLIP!

Citing

If you found this repository useful, please consider citing:

@software{ilharco_gabriel_2021_5143773,
  author       = {Ilharco, Gabriel and
                  Wortsman, Mitchell and
                  Carlini, Nicholas and
                  Taori, Rohan and
                  Dave, Achal and
                  Shankar, Vaishaal and
                  Namkoong, Hongseok and
                  Miller, John and
                  Hajishirzi, Hannaneh and
                  Farhadi, Ali and
                  Schmidt, Ludwig},
  title        = {OpenCLIP},
  month        = jul,
  year         = 2021,
  note         = {If you use this software, please cite it as below.},
  publisher    = {Zenodo},
  version      = {0.1},
  doi          = {10.5281/zenodo.5143773},
  url          = {https://doi.org/10.5281/zenodo.5143773}
}
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}

Comments
  • adding CoCa

    The PR idea is to add the CoCa model as implemented in https://github.com/lucidrains/CoCa-pytorch, using existing parts as much as possible.

    Ideally, this would also add the possibility to choose between the custom and non-custom Attention implementations, as is done for CLIP.

    opened by gpucce 76
  • Inference testing using random data

    I have started to work on integration tests with random data. These tests run on all pretrained models at fp32, with JIT True/False where applicable.

    Related issue: #198

    • [x] Inference testing on pre-made input and gt
      • [x] all models as listed by list_models()
      • [x] Image
      • [x] Text
    • [x] Random test data generator
      • [x] Random image data in PIL format
      • [x] Random text data
    • [x] Determine best way to store and recall test data: ~4.6MB for all models with 1 sample per config
    • [ ] Parallelize tests, unlikely due to RAM constraints

    To generate test data:

    python tests/util_test.py --all
    

    populates the tests/data folder with one torch .pt file per model config, to be used by the tests.

    opened by lopho 39
  • Text Tower Refactor

    A refactor of #178 that will keep backwards compat for existing weights/models but provide a different base model for new models using custom text towers...

    opened by rwightman 22
  • How to obtain logits (and probabilities) for 0-shot classification of single classes

    First of all, thanks for the amazing work going into this repo! In the case where we want to return the probability of the presence of 1 class (e.g. "dog") in a set of images, how would we go about it? While (100.0 * image_features @ text_features.T).softmax(dim=-1) provides well-calibrated probabilities in the multi-class setting, (100.0 * image_features @ text_features.T).sigmoid() does not when we return the logits of only 1 class and have no other classes to compute the softmax against. From logits = np.dot(I_e, T_e.T) * np.exp(t) in Figure 3 of the CLIP paper, it would have to follow that t=4.6... given np.exp(t)=100 from the usage snippet in the README, is this correct (Edit: indeed model.logit_scale confirms this)? And wouldn't this be surprisingly consistent across architectures/training runs? I believe the OpenAI implementation initialises t=1/.07 leading to an initial scaling factor of approximately 14.29. This is then trained of course (link to code). Alternatively, could I try sampling a random, normalised vector as text_feature for a non-existing "non-dog" class and apply (100.0 * image_features @ text_features.T).softmax(dim=-1) as in the multi-class setting? Thanks
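
    A minimal sketch of the quantities discussed above (hypothetical, already-normalized feature tensors; the learned temperature is read from model.logit_scale rather than hard-coding 100):

    # image_features: (N, D), text_features: (C, D), both L2-normalized.
    scale = model.logit_scale.exp()            # exp(t); roughly 100 for the released CLIP models
    logits = scale * image_features @ text_features.T

    probs_multiclass = logits.softmax(dim=-1)  # calibrated only relative to the provided classes
    probs_single = logits.sigmoid()            # not calibrated, since CLIP trains with a softmax over the batch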

    opened by arnaudvl 21
  • add `generate` to coca model

    This PR should add the generate method to the CoCa model to add support for generation

    based on https://github.com/lucidrains/x-transformers/blob/main/x_transformers/autoregressive_wrapper.py

    opened by gpucce 16
  • Get well adjusted confidence scores from similarity of CLIP encodings

    I am using CLIP to check similarity between text and an image. Now for example I have list of words (objects) I want to check against. For example (“elephant”, “tiger”, “giraffe”).

    By taking the dot product of the encodings I get the similarity value. To evaluate the “confidence” I take the softmax over the outputs, and it works very well for predicting which class is in the image. But it could happen that the classes are not mutually exclusive; in that case softmax doesn’t make sense. I tried to use sigmoid as it is used for multi-label classification, but it seems to give me values all around 0.55 (classes that are correct around 0.56 and classes that are wrong around 0.54), so in the example (0.565, 0.55, 0.62) if elephant and giraffe are in the picture. Thus it is hard to set a threshold there.

    I would like to have something like (0.95, 0.05, 0.98) if elephant and giraffe are in the picture, i.e., the similarity is high for both words.

    Am I overcomplicating this, and is there a standard way to do it? Is it even possible to get such a well-adjusted confidence score?

    opened by justlike-prog 15
  • Loss is constant

    I'm using CLIP to train on my custom dataset with the following params:

    • Dataset size: 50k image-text pairs
    • Batch size: 128
    • Image size: 224
    • GPUs: 1
    • Epochs: 500

    It's been running for a while now, I'm on my 15th epoch, and the loss hasn't changed at all. It isn't exactly constant, but it stays around 4.8xxx. Should I be concerned? I'm not sure why this is happening.

    opened by tarunn2799 14
  • AttributeError Open CLIP has no attribute "create_model"

    I am suddenly having issues loading CLIP models, without any change to our system in almost 2 months. I am getting the following error:

    AttributeError                            Traceback (most recent call last)
    <ipython-input-4-145ac54c6c43> in <module>
        509 if RN101: print("Downloading CLIP Model: RN101");clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device))
        510 
    --> 511 if ViTB32_laion2b_e16: print("Downloading CLIP Model: ViT-B/32 laion2b_e16"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion2b_e16').eval().requires_grad_(False).to(device))
        512 if ViTB32_laion400m_e31: print("Downloading CLIP Model: ViT-B-32 laion400m_e31"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e31').eval().requires_grad_(False).to(device))
        513 if ViTB32_laion400m_32: print("Downloading CLIP Model: ViT-B/32 laion400m_e32"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e32').eval().requires_grad_(False).to(device))
    
    AttributeError: module 'open_clip' has no attribute 'create_model'
    

    Has this been documented or seen before?

    To note, this seems to be happening randomly, as if the whole module is crashing. One load of a CLIP model is fine, and the next it says there is no create_model method, which it previously used just fine.

    opened by WASasquatch 12
  • Add support for gradient accumulation.

    Added a new flag --accum-freq (accumulation frequency) which defaults to 1.

    If this is greater than 1, then the optimizer is only stepped every --accum-freq batches.

    Can be combined with gradient checkpointing.

    This feature was requested for cases where people only have a few GPUs but want to train with a large batch size.

    We don't have to merge this if people think it makes things too complicated; we can instead close it and point to it on request, but I'm at least curious to hear thoughts.

    For a per-GPU batch size of m and --accum-freq k, the effective per-GPU batch size is mk.

    The basic pseudocode, when --accum-freq > 1, is:

    accum_data, accum_features = [], []
    for i, data in enumerate(dataloader):

      optimizer.zero_grad()

      # first, get the features for a bunch of batches without gradient tracking
      with torch.no_grad():
        features = model(data)
      accum_data.append(data)
      accum_features.append(features)

      # only step every accum_freq batches
      if (i + 1) % accum_freq > 0:
        continue

      # now re-compute the forward pass for the accumulated batches, with gradient tracking
      for j, data in enumerate(accum_data):
        features = model(data)
        all_features = torch.cat(accum_features[:j] + [features] + accum_features[j + 1:])
        loss = get_loss(all_features)
        loss.backward()

      optimizer.step()
      accum_data, accum_features = [], []

    opened by mitchellnw 11
  • Naming clash in new CLIP models

    I just cloned this repository on a Windows computer and saw the following:

    PS C:\Users\585491\documents\research> git clone https://github.com/mlfoundations/open_clip.git
    Cloning into 'open_clip'...
    remote: Enumerating objects: 1637, done.
    remote: Counting objects: 100% (74/74), done.
    remote: Compressing objects: 100% (53/53), done.
    remote: Total 1637 (delta 25), reused 49 (delta 17), pack-reused 1563
    Receiving objects: 100% (1637/1637), 8.06 MiB | 10.91 MiB/s, done.
    Resolving deltas: 100% (934/934), done.
    warning: the following paths have collided (e.g. case-sensitive paths
    on a case-insensitive filesystem) and only one from the same
    colliding group is in the working tree:
    
      'src/open_clip/model_configs/ViT-G-14.json'
      'src/open_clip/model_configs/ViT-g-14.json'
      'tests/data/output/ViT-G-14_None_fp32_random_image.pt'
      'tests/data/output/ViT-g-14_None_fp32_random_image.pt'
      'tests/data/output/ViT-G-14_None_fp32_random_text.pt'
      'tests/data/output/ViT-g-14_None_fp32_random_text.pt'
    

    It would be nice if the names could be adjusted to be compliant with case-insensitive file systems.

    opened by StellaAthena 11
  • Cannot reproduce your work

    Hi team, I am trying to reproduce your results on the Conceptual Captions dataset and am not able to obtain the same numbers.

    For example, I obtain a 0.0532 ImageNet zero-shot top-1 val accuracy with the RN50 config, compared to the ~0.2 you report in the Loss Curves section of the README.

    I used the cc12m dataset, but only obtained about 3M pairs due to limited network bandwidth. I trained on one node with 8 V100 GPUs and a batch size of 150*8=1200 with default params (32 epochs).

    My command is : torchrun --nproc_per_node 8 --nnodes=1 --node_rank=0 -m training.main --train-data '/dataset/cc12m/{00000..00640}.tar' --dataset-type webdataset --batch-size 150 --precision amp --workers 4 --imagenet-val=/dataset/imagenet/val --model RN50 --train-num-samples 3056141 --report-to tensorboard

    Could you please help me find what's wrong? Maybe it is the small dataset (3M pairs, 25% of cc12m), or other hyperparams (like lr)?

    Could you share the other hyperparams you used to obtain those results so that I can run the training with the exact same setup, or better yet, the exact command you used for the reported results? I would really appreciate that.

    Thanks a lot.

    opened by CloudRR 9
  • Fix braceexpand memory explosion for complex urls

    Currently, handling complex webdataset URL patterns in args.train_data can lead to an unnecessary memory explosion when using :: to concatenate multiple data sources. This can be parsed correctly by webdataset using wds.shardlists.expand_urls(urls).

    See Issue https://github.com/mlfoundations/open_clip/issues/278.

    opened by gabrielilharco 1
  • Resize embeddings and vocab

    I fine-tuned the text encoder of CLIP and added some additional tokens to it. Is there a way to load the checkpoint with a larger embedding size?

    opened by ambiSk 0
  • Is there a way to do multi-label classification with CLIP?

    The concrete use case is as follows. I have the classes baby, child, teen, adult. My idea was to use the similarity between text and image features (for the text features I used the prompt 'there is at least one (c) in the photo', c being one of the 4 classes).

    I went through quite a lot of examples, but I am running into the issue that the similarity scores are often very different for a fixed class, and/or classes that do appear might have very similar scores (like baby and child). For similarity scores I use the cosine similarity multiplied by 2.5 to stretch the score into the interval [0, 1], as is done in the CLIPScore paper.

    Setting a threshold in that sense doesn't seem possible.

    Does anyone have an idea for this? I feel quite stuck on how I should proceed.

    opened by justlike-prog 1
  • Support for initializing image tower with pretrained weights

    Related to #332.

    I tried to keep the modifications constrained to factory.py and the configuration being passed. I tested full initialization with pretrained weights from openai and laion, and also initializing only the image tower. Review is definitely needed and appreciated.

    opened by Ja1Zhou 0
  • Best practice for supporting initialization from pretrained image tower with custom text tower?

    An example would be the case described in the Chinese-clip paper. If my understanding is correct, this is currently hard to achieve without downloading and merging separate copies of both towers with custom code. I would like to add this feature, and I wonder if I could get some advice before merging it into the main branch.

    opened by Ja1Zhou 1
  • Add TextTextCLIP

    This pull request adds TextTextCLIP (a CLIP-like text-to-text contrastive retrieval model) to the main branch. It is still a work in progress.

    Tasks

    • [X] Add a config file for TextTextCLIP
    • [X] Add TextTextCLIP in model.py
    • [X] Modify factory.py to load model
    • [X] Modify data.py to load text data
    • [X] Modify `main.py` to train TextTextCLIP
    • [X] Test loading TextTextCLIP
    • [X] Test loading text-pair data.
    • [X] Test dummy training
    • [X] Rename variables
    opened by lingjzhu 0