Official repository for the paper "Instance-Conditioned GAN"


IC-GAN: Instance-Conditioned GAN

Official PyTorch code of Instance-Conditioned GAN by Arantxa Casanova, Marlene Careil, Jakob Verbeek, Michał Drożdżal and Adriana Romero-Soriano.

Generate images with IC-GAN in a Colab Notebook

We provide a Google Colab notebook to generate images with IC-GAN and its class-conditional counterpart.

The figure below depicts two instances, unseen during training and downloaded from Creative Commons search, and the generated images with IC-GAN and class-conditional IC-GAN when conditioning on the class "castle":

Additionally, and inspired by this Colab, we provide in the same Colab notebook the functionality to guide generations with text captions, using the CLIP model. As an example, the figure below shows three instance conditionings and a text caption (top), followed by the images generated by IC-GAN (bottom) when optimizing the noise vector along CLIP's gradient for 100 iterations.

Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop: 1: "Landscape in Bavaria" by shining.darkness, licensed under CC BY 2.0, 2: "Fantasy Landscape - slolsss" by Douglas Tofoli is marked with CC PDM 1.0, 3: "How to Draw Landscapes Simply" by Kuwagata Keisai is marked with CC0 1.0

Requirements

  • Python 3.8
  • Cuda v10.2 / Cudnn v7.6.5
  • gcc v7.3.0
  • Pytorch 1.8.0
  • A conda environment containing the aforementioned version of PyTorch and the other required packages can be created from environment.yml by entering the command: conda env create -f environment.yml.
  • Faiss: follow the instructions in the original repository.

Overview

This repository consists of four main folders:

  • data_utils: A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone.
  • inference: Scripts to test the models both qualitatively and quantitatively.
  • BigGAN_PyTorch: It provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from Pytorch BigGAN repository, made available under the MIT License. It has been modified to add additional utilities and it enables IC-GAN training on top of it.
  • stylegan2_ada_pytorch: It provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. The code base comes from StyleGAN2 Pytorch, made available under the Nvidia Source Code License. It has been modified to add additional utilities and it enables IC-GAN training on top of it.

(Python script) Generate images with IC-GAN

Alternatively, we can generate images with IC-GAN models directly from a Python script, following these steps:

  1. Download the desired pretrained models (links below) and the pre-computed 1000 instance features from ImageNet and extract them into a folder pretrained_models_path.
| model | backbone | class-conditional? | training dataset | resolution | url |
| --- | --- | --- | --- | --- | --- |
| IC-GAN | BigGAN | No | ImageNet | 256x256 | model |
| IC-GAN (half capacity) | BigGAN | No | ImageNet | 256x256 | model |
| IC-GAN | BigGAN | No | ImageNet | 128x128 | model |
| IC-GAN | BigGAN | No | ImageNet | 64x64 | model |
| IC-GAN | BigGAN | Yes | ImageNet | 256x256 | model |
| IC-GAN (half capacity) | BigGAN | Yes | ImageNet | 256x256 | model |
| IC-GAN | BigGAN | Yes | ImageNet | 128x128 | model |
| IC-GAN | BigGAN | Yes | ImageNet | 64x64 | model |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 256x256 | model |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 128x128 | model |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 64x64 | model |
| IC-GAN | BigGAN | No | COCO-Stuff | 256x256 | model |
| IC-GAN | BigGAN | No | COCO-Stuff | 128x128 | model |
| IC-GAN | StyleGAN2 | No | COCO-Stuff | 256x256 | model |
| IC-GAN | StyleGAN2 | No | COCO-Stuff | 128x128 | model |
  2. Execute:
python inference/generate_images.py --root_path [pretrained_models_path] --model [model] --model_backbone [backbone] --resolution [res]
  • model can be chosen from ["icgan", "cc_icgan"] to use the IC-GAN or the class-conditional IC-GAN model respectively.
  • backbone can be chosen from ["biggan", "stylegan2"].
  • res indicates the resolution at which the model has been trained. For ImageNet, choose one in [64, 128, 256], and for COCO-Stuff, one in [128, 256].

This script produces a .PNG file containing a grid of generated images: each row corresponds to an instance feature, and each column position to a sampled noise vector.

Additional and optional parameters:

  • index: (None by default) an integer from 0 to 999 that chooses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in pretrained_models_path/stored_instances.
  • swap_target: (None by default) is an integer from 0 to 999 indicating an ImageNet class label. This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used.
  • which_dataset: (ImageNet by default) can be chosen from ["imagenet", "coco"] to indicate which dataset (training split) to sample the instances from.
  • trained_dataset: (ImageNet by default) can be chosen from ["imagenet", "coco"] to indicate the dataset on which the IC-GAN model has been trained.
  • num_imgs_gen: (5 by default), it changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load.
  • num_conditionings_gen: (5 by default), it changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load.
  • z_var: (1.0 by default) controls the truncation factor for the generation.
  • Optionally, the script can be run with the following additional options --visualize_instance_images --dataset_path [dataset_path] to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images dataset_path. Ground-truth instances will be plotted as the leftmost image for each row.
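As a rough illustration of what z_var does, below is a minimal numpy sketch of truncation-style noise sampling. The helper name and the exact truncation scheme are assumptions for illustration, not the script's actual implementation:

```python
import numpy as np

def sample_truncated_noise(num_samples, dim_z, z_var=1.0, seed=None):
    """Sample noise vectors whose variance is scaled by z_var.

    A smaller z_var trades diversity for fidelity, mirroring the
    truncation trick used at generation time. (Hypothetical helper;
    the repository script may truncate differently.)
    """
    rng = np.random.default_rng(seed)
    return np.sqrt(z_var) * rng.standard_normal((num_samples, dim_z))

# Five noise vectors of dimension 120 at truncation factor 0.5.
z = sample_truncated_noise(5, 120, z_var=0.5, seed=0)
print(z.shape)  # (5, 120)
```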

Data preparation

ImageNet
  1. Download the dataset from here.
  2. Download the SwAV feature extractor weights from here.
  3. Replace the paths in data_utils/prepare_data.sh: out_path by the path where hdf5 files will be stored, path_imnet by the path where ImageNet dataset is downloaded, and path_swav by the path where SwAV weights are stored.
  4. Execute ./data_utils/prepare_data.sh imagenet [resolution], where [resolution] can be an integer in {64,128,256}. This script will create several hdf5 files:
    • ILSVRC[resolution]_xy.hdf5 and ILSVRC[resolution]_val_xy.hdf5, where images and labels are stored for the training and validation set respectively.
    • ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5 that contains the instance features for each image.
    • ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5 that contains the list of [k_nn] neighbors for each of the instance features.
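The resulting HDF5 files can be inspected with h5py. The sketch below builds a tiny stand-in file and lists its datasets; the dataset names ("imgs", "labels") are assumptions and may differ from what prepare_data.sh actually writes, so check with .keys() on a real file:

```python
import h5py
import numpy as np

def summarize_hdf5(path):
    """Return {dataset_name: shape} for every dataset in an HDF5 file."""
    with h5py.File(path, "r") as f:
        return {name: f[name].shape for name in f.keys()}

# Build a tiny stand-in file to demonstrate the expected layout.
with h5py.File("demo_xy.hdf5", "w") as f:
    f.create_dataset("imgs", data=np.zeros((4, 64, 64, 3), dtype=np.uint8))
    f.create_dataset("labels", data=np.arange(4))

print(summarize_hdf5("demo_xy.hdf5"))
```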

ImageNet-LT
  1. Download the ImageNet dataset from here. Following ImageNet-LT, the file ImageNet_LT_train.txt can be downloaded from this link and should be stored in the folder ./BigGAN_PyTorch/imagenet_lt.
  2. Download the pre-trained weights of the ResNet on ImageNet-LT from this link, provided by the classifier-balancing repository.
  3. Replace the paths in data_utils/prepare_data.sh: out_path by the path where hdf5 files will be stored, path_imnet by the path where ImageNet dataset is downloaded, and path_classifier_lt by the path where the pre-trained ResNet50 weights are stored.
  4. Execute ./data_utils/prepare_data.sh imagenet_lt [resolution], where [resolution] can be an integer in {64,128,256}. This script will create several hdf5 files:
    • ILSVRC[resolution]longtail_xy.hdf5, where images and labels are stored for the training set.
    • ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5 that contains the instance features for each image.
    • ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5 that contains the list of [k_nn] neighbors for each of the instance features.

COCO-Stuff
  1. Download the dataset following the instructions in the LostGANs repository.
  2. Download the SwAV feature extractor weights from here.
  3. Replace the paths in data_utils/prepare_data.sh: out_path by the path where hdf5 files will be stored, path_imnet by the path where the COCO-Stuff dataset is downloaded, and path_swav by the path where SwAV weights are stored.
  4. Execute ./data_utils/prepare_data.sh coco [resolution], where [resolution] can be an integer in {128,256}. This script will create several hdf5 files:
    • COCO[resolution]_xy.hdf5 and COCO[resolution]_val_test_xy.hdf5, where images and labels are stored for the training and evaluation set respectively.
    • COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5 that contains the instance features for each image.
    • COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5 that contains the list of [k_nn] neighbors for each of the instance features.

Other datasets
  1. Download the corresponding dataset and store in a folder dataset_path.
  2. Download the SwAV feature extractor weights from here.
  3. Replace the paths in data_utils/prepare_data.sh: out_path by the path where hdf5 files will be stored and path_swav by the path where SwAV weights are stored.
  4. Execute ./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path], where [dataset_name] is the dataset name, [resolution] is an integer such as 128 or 256, and [dataset_path] contains the dataset images. This script will create several hdf5 files:
    • [dataset_name][resolution]_xy.hdf5, where images and labels are stored for the training set.
    • [dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5 that contains the instance features for each image.
    • [dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5 that contains the list of k_nn neighbors for each of the instance features.

How to subsample an instance feature dataset with k-means
To subsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm:
python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path]
  • Adding --gpu allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution.
  • Adding the parameter --feature_extractor [feature_extractor] chooses which feature extractor to use, with feature_extractor in ['selfsupervised', 'classification'], to use SwAV or the ResNet pretrained on the ImageNet classification task, respectively.
  • The number of k-means clusters can be set with --kmeans_subsampled [centers], where centers is an integer.
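The subsampling above can be sketched with a plain Lloyd's k-means in numpy. The repository relies on faiss for speed; this toy version only illustrates the idea of picking one representative instance per cluster:

```python
import numpy as np

def kmeans_select(feats, centers, iters=50, seed=0):
    """Run Lloyd's k-means over instance features and return the index
    of the feature vector closest to each centroid, i.e. `centers`
    representative instances. (Sketch only; the repository uses faiss.)"""
    rng = np.random.default_rng(seed)
    mu = feats[rng.choice(len(feats), centers, replace=False)]
    for _ in range(iters):
        # Assign every feature to its nearest centroid.
        d = ((feats[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Move each centroid to the mean of its assigned features.
        for k in range(centers):
            pts = feats[assign == k]
            if len(pts):
                mu[k] = pts.mean(0)
    d = ((feats[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    return d.argmin(0)  # one representative index per centroid

# Two well-separated toy clusters of 8-dimensional "features".
feats = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 8)),
                   np.random.default_rng(2).normal(5, 0.1, (50, 8))])
idx = kmeans_select(feats, centers=2)
print(sorted(int(i) for i in idx))
```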

How to train the models

BigGAN or StyleGAN2 backbone

Training parameters are stored in JSON files in [backbone_folder]/config_files/[dataset]/*.json, where [backbone_folder] is either BigGAN_PyTorch or stylegan2_ada_pytorch and [dataset] can be either ImageNet, ImageNet-LT or COCO_Stuff.

cd BigGAN_PyTorch
python run.py --json_config config_files/[dataset]/[config_file].json --data_root [data_root] --base_root [base_root]

or

cd stylegan2_ada_pytorch
python run.py --json_config config_files/[dataset]/[config_file].json --data_root [data_root] --base_root [base_root]

where:

  • data_root: path where the data has been prepared and stored, following the previous section (Data preparation).
  • base_root: path where the model weights and logs will be stored.

Note that one can create other JSON files to modify the training parameters.
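For example, a new JSON config could be derived from an existing one by overriding a few fields before passing it to run.py. The keys below are illustrative, not the repository's actual parameter names:

```python
import json

# Hypothetical base configuration (keys are illustrative).
base = {"batch_size": 256, "resolution": 128, "k_nn": 50}

# Override a parameter and write a new config for run.py to consume.
with open("my_config.json", "w") as f:
    json.dump({**base, "batch_size": 64}, f, indent=2)

with open("my_config.json") as f:
    cfg = json.load(f)
print(cfg["batch_size"])  # 64
```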

Other backbones

To be able to run IC-GAN with other backbones, we outline the main steps:

  • Place the new backbone code in a new folder under ic_gan (ic_gan/new_backbone).
  • Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator).
  • Create a trainer.py file with the training loop to train an IC-GAN with the new backbone. The data_utils folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone trainer.py file can be used as an inspiration.
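The core architectural change can be sketched in a few lines of numpy: the generator consumes the noise concatenated with the instance feature, and the discriminator scores image features together with a projection of the same feature. All names and shapes below are illustrative, not the repository's API:

```python
import numpy as np

dim_z, dim_h = 120, 2048  # noise and instance-feature dimensions (illustrative)

def generator_input(z, h):
    # Condition the generator by concatenating noise and instance feature.
    return np.concatenate([z, h], axis=1)  # (B, dim_z + dim_h)

def discriminator_logit(img_feat, h, proj):
    # Projection-discriminator style: base logit plus the inner product
    # between image features and a learned projection of the conditioning.
    return img_feat.sum(1) + (img_feat * (h @ proj)).sum(1)

z = np.zeros((4, dim_z))
h = np.ones((4, dim_h))
proj = np.zeros((dim_h, 512))
img_feat = np.ones((4, 512))
print(generator_input(z, h).shape, discriminator_logit(img_feat, h, proj).shape)
# (4, 2168) (4,)
```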

How to test the models

To obtain the FID and IS metrics on ImageNet and ImageNet-LT:

  1. Execute:
python inference/test.py --json_config [BigGAN_PyTorch or stylegan2_ada_pytorch]/config_files/[dataset]/[config_file].json --num_inception_images [num_imgs] --sample_num_npz [num_imgs] --eval_reference_set [ref_set] --sample_npz --base_root [base_root] --data_root [data_root] --kmeans_subsampled [kmeans_centers] --model_backbone [backbone]

To obtain the TensorFlow IS and FID metrics, use an environment with Python <3.7 and TensorFlow 1.15. Then:

  2. Obtain Inception Scores and pre-computed FID moments:
python ../data_utils/inception_tf13.py --experiment_name [exp_name] --experiment_root [base_root] --kmeans_subsampled [kmeans_centers] 

For stratified FIDs on the ImageNet-LT dataset, the following parameters can be added: --which_dataset 'imagenet_lt' --split 'val' --strat_name [stratified_split], where stratified_split can be one of [few, low, many].

  3. (Only needed once) Pre-compute reference moments with tensorflow code:
python ../data_utils/inception_tf13.py --use_ground_truth_data --data_root [data_root] --split [ref_set] --resolution [res] --which_dataset [dataset]
  4. (Using this repository) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtained in 3). For example, to compute the FID with the ImageNet validation set as reference: python TTUR/fid.py [base_root]/[exp_name]/TF_pool_.npz [data_root]/imagenet_val_res[res]_tf_inception_moments_ground_truth.npz
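The final step reduces to the Fréchet distance between two Gaussians fitted to Inception features. Below is a self-contained numpy/scipy sketch of that formula, equivalent in spirit to TTUR/fid.py but not a copy of it:

```python
import numpy as np
from scipy import linalg

def fid_from_moments(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 @ S2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))

# Identical moments give FID 0 (tiny 4-d toy statistics).
mu, sigma = np.zeros(4), np.eye(4)
print(round(fid_from_moments(mu, sigma, mu, sigma), 6))  # 0.0
```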

To obtain the FID metric on COCO-Stuff:

  1. Obtain ground-truth jpeg images: python data_utils/store_coco_jpeg_images.py --resolution [res] --split [ref_set] --data_root [data_root] --out_path [gt_coco_images] --filter_hd [filter_hd]
  2. Store generated images as jpeg images: python sample.py --json_config ../[BigGAN_PyTorch or stylegan2_ada_pytorch]/config_files/[dataset]/[config_file].json --data_root [data_root] --base_root [base_root] --sample_num_npz [num_imgs] --which_dataset 'coco' --eval_instance_set [ref_set] --eval_reference_set [ref_set] --filter_hd [filter_hd] --model_backbone [backbone]
  3. Using this repository, compute FID on the two folders of ground-truth and generated images.

where:

  • dataset: option to select the dataset in ['imagenet', 'imagenet_lt', 'coco'].
  • exp_name: name of the experiment folder.
  • data_root: path where the data has been prepared and stored, following the previous section "Data preparation".
  • base_root: path where to find the model (for example, where the pretrained models have been downloaded).
  • num_imgs: needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.
  • ref_set: set to 'val' for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to 'train' for ImageNet-LT or COCO to obtain metrics with the training set as reference.
  • kmeans_centers: set to 1000 for ImageNet and to -1 for ImageNet-LT.
  • backbone: model backbone architecture in ['biggan','stylegan2'].
  • res: integer indicating the resolution of the images (64,128,256).
  • gt_coco_images: folder to store the ground-truth JPEG images of that specific split.
  • filter_hd: only valid for ref_set=val. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen).

Utilities for GAN backbones

We modified and added extra utilities to facilitate training in both the BigGAN and StyleGAN2 base repositories.

BigGAN change log

The following changes were made:

  • BigGAN architecture:

    • In train_fns.py: option to either have the optimizers inside the generator and discriminator class, or directly in the G_D wrapper module. Additionally, added an option to augment both generated and real images with augmentations from DiffAugment.
    • In BigGAN.py: added a function get_condition_embeddings to handle the conditioning separately.
    • Small modifications to layers.py to adapt the batchnorm function calls to the pytorch 1.8 version.
  • Training utilities:

    • Added trainer.py file (replacing train.py):
      • Training now allows the usage of DDP for faster single-node and multi-node training.
      • Training is performed by epochs instead of by iterations.
      • Option to stop the training by using early stopping or when experiments diverge.
    • In utils.py:
      • Replaced MultiEpochSampler with CheckpointedSampler, which allows experiments to be resumed when training by epochs and fixes a bug where MultiEpochSampler required a long time to fetch data permutations as the number of epochs increased.
      • ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.
      • ImageNet-LT: Added class balancing (uniform and temperature annealed).
      • Added data augmentations from DiffAugment.
  • Testing utilities:

    • In calculate_inception_moments.py: added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).
    • In inception_utils.py: added option to compute Precision, Recall, Density, Coverage and stratified FID.
  • Data utilities:

    • In datasets.py, added option to load ImageNet-LT dataset.
    • Added ImageNet-LT.txt files with image indexes for training and validation split.
    • In utils.py:
      • Separate functions to obtain the data from hdf5 files (get_dataset_hdf5) or from directory (get_dataset_images), as well as a function to obtain only the data loader (get_dataloader).
      • Added the function sample_conditionings to handle possible different conditionings to train G with.
  • Experiment utilities:

    • Added JSON files to launch experiments with the proposed hyper-parameter configuration.
    • Script to launch experiments with either the submitit tool or locally in the same machine (run.py).
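The temperature-annealed class balancing mentioned above can be sketched as sampling classes with probability proportional to n_c^(1/T), where n_c is the class frequency. The exact schedule used in the repository may differ; this is only an illustration of the idea:

```python
import numpy as np

def class_sampling_probs(counts, temperature=1.0):
    """Temperature-annealed class distribution for long-tailed data:
    p_c ∝ n_c^(1/T). T=1 recovers the empirical distribution;
    large T tends toward uniform (class-balanced) sampling."""
    counts = np.asarray(counts, dtype=float)
    w = counts ** (1.0 / temperature)
    return w / w.sum()

counts = [1000, 100, 10]  # toy many / medium / few-shot class frequencies
p_emp = class_sampling_probs(counts, temperature=1.0)
p_flat = class_sampling_probs(counts, temperature=10.0)
print(p_emp.round(3), p_flat.round(3))
```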

StyleGAN2 change log

  • Multi-node DistributedDataParallel training.
  • Added early stopping based on the training FID metric.
  • Automatic checkpointing when jobs are automatically rescheduled on a cluster.
  • Option to load dataset from hdf5 file.
  • Replaced the Click Python package with `ArgumentParser`.
  • Only saving best and last model weights.
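The FID-based early stopping above can be sketched as a small bookkeeping class; the patience scheme below is an assumption for illustration, not the repository's exact criterion:

```python
class FidEarlyStopper:
    """Stop training when the training FID has not improved for
    `patience` consecutive evaluations. (Illustrative sketch only.)"""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_evals = 0

    def should_stop(self, fid):
        if fid < self.best:
            self.best = fid       # new best FID: reset the counter
            self.bad_evals = 0
        else:
            self.bad_evals += 1   # no improvement this evaluation
        return self.bad_evals >= self.patience

stopper = FidEarlyStopper(patience=2)
history = [40.0, 30.0, 31.0, 32.0]  # FID per evaluation
flags = [stopper.should_stop(f) for f in history]
print(flags)  # [False, False, False, True]
```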

Acknowledgements

We would like to thank the authors of the PyTorch BigGAN repository and StyleGAN2 PyTorch, as our model requires their repositories to train IC-GAN with a BigGAN or StyleGAN2 backbone respectively. Moreover, we would like to further thank the authors of generative-evaluation-prdc, data-efficient-gans, faiss and sg2im, as some components were borrowed and modified from their code bases. Finally, we thank the author of WanderCLIP, as well as the following repositories used in our Colab notebook: pytorch-pretrained-BigGAN and CLIP.

License

The majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and PRDC are licensed under the MIT license; COCO-Stuff loader is licensed under Apache License 2.0; DiffAugment is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/LICENSE.txt. In the Colab notebook, CLIP and pytorch-pretrained-BigGAN code is used, both licensed under the MIT license.

Disclaimers

THE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE CLIP SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

THE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Cite the paper

If this repository, the paper or any of its content is useful for your research, please cite:

@misc{casanova2021instanceconditioned,
      title={Instance-Conditioned GAN}, 
      author={Arantxa Casanova and Marlène Careil and Jakob Verbeek and Michal Drozdzal and Adriana Romero-Soriano},
      year={2021},
      eprint={2109.05070},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}