Torchreid: Deep learning person re-identification in PyTorch.

Overview

Torchreid

Torchreid is a library for deep-learning person re-identification, written in PyTorch.

It features:

  • multi-GPU training
  • support for both image- and video-reid
  • end-to-end training and evaluation
  • incredibly easy preparation of reid datasets
  • multi-dataset training
  • cross-dataset evaluation
  • standard protocol used by most research papers
  • highly extensible (easy to add models, datasets, training methods, etc.)
  • implementations of state-of-the-art deep reid models
  • access to pretrained reid models
  • advanced training techniques
  • visualization tools (tensorboard, ranks, etc.)

Code: https://github.com/KaiyangZhou/deep-person-reid.

Documentation: https://kaiyangzhou.github.io/deep-person-reid/.

How-to instructions: https://kaiyangzhou.github.io/deep-person-reid/user_guide.

Model zoo: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.

Tech report: https://arxiv.org/abs/1910.10093.

You can find some research projects that are built on top of Torchreid here.

What's new

  • [Apr 2021] We have updated the appendix in the TPAMI version of OSNet to include results in the multi-source domain generalization setting. The trained models can be found in the Model Zoo.
  • [Apr 2021] We have added a script to automate the process of calculating average results over multiple splits. For more details please see tools/parse_test_res.py.
  • [Apr 2021] v1.4.0: We added the person search dataset, CUHK-SYSU. Please see the documentation regarding how to download the dataset (it contains cropped person images).
  • [Apr 2021] All models in the model zoo have been moved to Google Drive. Please raise an issue if any model's performance is inconsistent with the numbers shown on the model zoo page (this could be caused by wrong links).
  • [Mar 2021] OSNet will appear in the TPAMI journal! Compared with the conference version, which focuses on discriminative feature learning using the omni-scale building block, this journal extension further considers generalizable feature learning by integrating instance normalization layers with the OSNet architecture. We hope this journal paper can motivate more future work to tackle the generalization issue in cross-dataset re-ID.
  • [Mar 2021] Generalization across domains (datasets) in person re-ID is crucial in real-world applications, and is closely related to the topic of domain generalization. Interested in learning how the field of domain generalization has developed over the last decade? Check out our recent survey on this topic at https://arxiv.org/abs/2103.02503, covering the history, datasets, related problems, methodologies, potential directions, and more (methods designed for generalizable re-ID are also covered!).
  • [Feb 2021] v1.3.6 Added University-1652, a new dataset for multi-view multi-source geo-localization (credit to Zhedong Zheng).
  • [Feb 2021] v1.3.5: Now the cython code works on Windows (credit to lablabla).
  • [Jan 2021] Our recent work, MixStyle (mixing instance-level feature statistics of samples of different domains for improving domain generalization), has been accepted to ICLR'21. The code has been released at https://github.com/KaiyangZhou/mixstyle-release where the person re-ID part is based on Torchreid.
  • [Jan 2021] A new evaluation metric called mean Inverse Negative Penalty (mINP) for person re-ID has been introduced in Deep Learning for Person Re-identification: A Survey and Outlook (TPAMI 2021). Their code can be accessed at https://github.com/mangye16/ReID-Survey.
  • [Aug 2020] v1.3.3: Fixed bug in visrank (caused by not unpacking dsetid).
  • [Aug 2020] v1.3.2: Added _junk_pids to grid and prid. This avoids using mislabeled gallery images for training when setting combineall=True.
  • [Aug 2020] v1.3.0: (1) Added dsetid to the existing 3-tuple data source, resulting in (impath, pid, camid, dsetid). This variable denotes the dataset ID and is useful when combining multiple datasets for training (as a dataset indicator). E.g., when combining market1501 and cuhk03, the former will be assigned dsetid=0 while the latter will be assigned dsetid=1. (2) Added RandomDatasetSampler. Analogous to RandomDomainSampler, RandomDatasetSampler samples a certain number of images (batch_size // num_datasets) from each of the specified datasets (where num_datasets is the number of datasets being combined).
  • [Aug 2020] v1.2.6: Added RandomDomainSampler (it samples num_cams cameras, each with batch_size // num_cams images, to form a mini-batch).
  • [Jun 2020] v1.2.5: (1) Dataloader's output from __getitem__ has been changed from list to dict. Previously, an element, e.g. image tensor, was fetched with imgs=data[0]. Now it should be obtained by imgs=data['img']. See this commit for detailed changes. (2) Added k_tfm as an option to image data loader, which allows data augmentation to be applied k_tfm times independently to an image. If k_tfm > 1, imgs=data['img'] returns a list with k_tfm image tensors.
  • [May 2020] Added the person attribute recognition code used in Omni-Scale Feature Learning for Person Re-Identification (ICCV'19). See projects/attribute_recognition/.
  • [May 2020] v1.2.1: Added a simple API for feature extraction (torchreid/utils/feature_extractor.py). See the documentation for instructions; a minimal usage sketch is also given right after this list.
  • [Apr 2020] Code for reproducing the experiments of deep mutual learning in the OSNet paper (Supp. B) has been released at projects/DML.
  • [Apr 2020] Upgraded to v1.2.0. The engine class has been made more model-agnostic to improve extensibility. See Engine and ImageSoftmaxEngine for more details. Credit to Dassl.pytorch.
  • [Dec 2019] Our OSNet paper has been updated, with additional experiments (in section B of the supplementary) showing some useful techniques for improving OSNet's performance in practice.
  • [Nov 2019] ImageDataManager can load training data from target datasets by setting load_train_targets=True, and the train-loader can be accessed with train_loader_t = datamanager.train_loader_t. This feature is useful for domain adaptation research.
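
As a pointer for the feature-extraction API mentioned above, here is a minimal usage sketch; the checkpoint path and image files are placeholders you would replace with your own:

from torchreid.utils import FeatureExtractor

# build an extractor from a model name and an optional checkpoint path
extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='path/to/osnet_x1_0.pth',  # placeholder checkpoint
    device='cuda'
)

# accepts a list of image paths (or numpy arrays / torch tensors) and
# returns one feature vector per image
features = extractor(['image1.jpg', 'image2.jpg'])
print(features.shape)  # e.g. torch.Size([2, 512]) for OSNet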

Installation

Make sure conda is installed.

# cd to your preferred directory and clone this repo
git clone https://github.com/KaiyangZhou/deep-person-reid.git

# create environment
cd deep-person-reid/
conda create --name torchreid python=3.7
conda activate torchreid

# install dependencies
# make sure `which python` and `which pip` point to the correct path
pip install -r requirements.txt

# install torch and torchvision (select the proper cuda version to suit your machine)
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

# install torchreid (don't need to re-build it if you modify the source code)
python setup.py develop
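
To sanity-check the installation, you can print the version and list the available models (a quick smoke test; the exact output depends on your checkout):

# verify that torchreid is importable
python -c "import torchreid; print(torchreid.__version__)"
python -c "import torchreid; torchreid.models.show_avai_models()"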

Get started: 30 seconds to Torchreid

  1. Import torchreid
import torchreid
  2. Load data manager
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    targets='market1501',
    height=256,
    width=128,
    batch_size_train=32,
    batch_size_test=100,
    transforms=['random_flip', 'random_crop']
)

  3. Build model, optimizer and lr_scheduler

model = torchreid.models.build_model(
    name='resnet50',
    num_classes=datamanager.num_train_pids,
    loss='softmax',
    pretrained=True
)

model = model.cuda()

optimizer = torchreid.optim.build_optimizer(
    model,
    optim='adam',
    lr=0.0003
)

scheduler = torchreid.optim.build_lr_scheduler(
    optimizer,
    lr_scheduler='single_step',
    stepsize=20
)
  4. Build engine
engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager,
    model,
    optimizer=optimizer,
    scheduler=scheduler,
    label_smooth=True
)
  5. Run training and test
engine.run(
    save_dir='log/resnet50',
    max_epoch=60,
    eval_freq=10,
    print_freq=10,
    test_only=False
)
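
If you prefer the triplet-loss variant, a minimal sketch under the same setup is shown below; it assumes loss='triplet' in build_model and an identity-based sampler in the data manager (the values here are illustrative):

datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    height=256,
    width=128,
    batch_size_train=32,
    train_sampler='RandomIdentitySampler',  # required for triplet mining
    num_instances=4  # images sampled per identity in a batch
)

model = torchreid.models.build_model(
    name='resnet50',
    num_classes=datamanager.num_train_pids,
    loss='triplet',
    pretrained=True
).cuda()

optimizer = torchreid.optim.build_optimizer(model, optim='adam', lr=0.0003)
scheduler = torchreid.optim.build_lr_scheduler(
    optimizer, lr_scheduler='single_step', stepsize=20
)

engine = torchreid.engine.ImageTripletEngine(
    datamanager,
    model,
    optimizer=optimizer,
    scheduler=scheduler,
    margin=0.3,  # triplet margin
    weight_t=1,  # weight of the triplet loss
    weight_x=1   # weight of the softmax loss
)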

A unified interface

In "deep-person-reid/scripts/", we provide a unified interface to train and test a model. See "scripts/main.py" and "scripts/default_config.py" for more details. The folder "configs/" contains some predefined configs which you can use as a starting point.

Below we provide an example of training and testing OSNet (Zhou et al., ICCV'19). Assume PATH_TO_DATA is the directory containing reid datasets. The environment variable CUDA_VISIBLE_DEVICES is omitted; set it if you have multiple GPUs and want to use a specific subset of them.

Conventional setting

To train OSNet on Market1501, do

python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
--transforms random_flip random_erase \
--root $PATH_TO_DATA

The config file sets Market1501 as the default dataset. If you want to use DukeMTMC-reID instead, do

python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
-s dukemtmcreid \
-t dukemtmcreid \
--transforms random_flip random_erase \
--root $PATH_TO_DATA \
data.save_dir log/osnet_x1_0_dukemtmcreid_softmax_cosinelr

The code will automatically (download and) load the ImageNet pretrained weights. After the training is done, the model will be saved as "log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250". Under the same folder, you can find the tensorboard file. To visualize the learning curves using tensorboard, you can run tensorboard --logdir=log/osnet_x1_0_market1501_softmax_cosinelr in the terminal and visit http://localhost:6006/ in your web browser.

Evaluation is automatically performed at the end of training. To run the test again using the trained model, do

python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
--root $PATH_TO_DATA \
model.load_weights log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250 \
test.evaluate True

Cross-domain setting

Suppose you want to train OSNet on DukeMTMC-reID and test its performance on Market1501; you can do

python scripts/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad.yaml \
-s dukemtmcreid \
-t market1501 \
--transforms random_flip color_jitter \
--root $PATH_TO_DATA

Here we only test the cross-domain performance. However, if you also want to test the performance on the source dataset, i.e. DukeMTMC-reID, you can set -t dukemtmcreid market1501, which will evaluate the model on the two datasets separately.

Different from the same-domain setting, here we replace random_erase with color_jitter. This can improve the generalization performance on the unseen target dataset.

Pretrained models are available in the Model Zoo.
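
To load a downloaded checkpoint programmatically, a minimal sketch (the file name is a placeholder, and num_classes must match the number of training identities, e.g. 751 for Market1501):

import torchreid
from torchreid.utils import load_pretrained_weights

model = torchreid.models.build_model(
    name='osnet_x1_0',
    num_classes=751  # number of training identities in Market1501
)
load_pretrained_weights(model, 'osnet_x1_0_market.pth')  # placeholder path
model.eval()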

Datasets

Image-reid datasets

Geo-localization datasets

Video-reid datasets

Models

ImageNet classification models

Lightweight models

ReID-specific models

Useful links

Citation

If you use this code or the models in your research, please give credit to the following papers:

@article{torchreid,
  title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
  author={Zhou, Kaiyang and Xiang, Tao},
  journal={arXiv preprint arXiv:1910.10093},
  year={2019}
}

@inproceedings{zhou2019osnet,
  title={Omni-Scale Feature Learning for Person Re-Identification},
  author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
  booktitle={ICCV},
  year={2019}
}

@article{zhou2021osnet,
  title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
  author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
  journal={TPAMI},
  year={2021}
}
Comments
  • Proper way to export the model to onnx


    import torch
    import onnx
    import torchreid

    torchreid.models.show_avai_models()

    model = torchreid.models.build_model(name='osnet_ain_x1_0', num_classes=1000)

    torchreid.utils.load_pretrained_weights(model, "osnet_ain_x1_0_msmt17_256x128_amsgrad_ep50_lr0.0015_coslr_b64_fb10_softmax_labsmth_flip_jitter.pth")

    model.eval()  # put BN/dropout into inference mode before exporting

    input_names = ['input']
    output_names = ['output']
    dummy_input = torch.randn(1, 3, 256, 128)  # torch.autograd.Variable is deprecated; a plain tensor works

    torch.onnx.export(
        model,
        dummy_input,
        'osnet_ain_x1_0.onnx',
        input_names=input_names,
        output_names=output_names,
        verbose=True,
        export_params=True
    )
    
    The exported model is only 10633 KB, while the PyTorch model is 16888 KB.
    
    onnx_model = onnx.load("osnet_ain_x1_0.onnx")
    onnx.checker.check_model(onnx_model)
    
    
    1. The output messages all seem fine, but is this the correct way?
    2. What num_classes should I set?
    3. Am I using the correct input_name and output_name?


    opened by stereomatchingkiss 19
  • Some confusion on mAP of CUHK03 in eval_metrics.py


        cmc, AP = 0., 0.
        for repeat_idx in range(num_repeats):
            mask = np.zeros(len(raw_cmc), dtype=np.bool)
            for _, idxs in g_pids_dict.items():
                # randomly sample one image for each gallery person
                rnd_idx = np.random.choice(idxs)
                mask[rnd_idx] = True
            masked_raw_cmc = raw_cmc[mask]
            _cmc = masked_raw_cmc.cumsum()
            _cmc[_cmc > 1] = 1
            cmc += _cmc[:max_rank].astype(np.float32)
            # compute AP
            num_rel = masked_raw_cmc.sum()
            tmp_cmc = masked_raw_cmc.cumsum()
            tmp_cmc = [x / (i+1.) for i, x in enumerate(tmp_cmc)]
            tmp_cmc = np.asarray(tmp_cmc) * masked_raw_cmc
            AP += tmp_cmc.sum() / num_rel
    

    The author calculates the mAP of CUHK03 using single_gallery_shot. In my opinion, this is superfluous and only necessary for CMC. The mAP results calculated in this way differ from other implementations, for instance open-reid: https://github.com/Cysu/open-reid/blob/master/reid/evaluation_metrics/ranking.py
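
    For comparison, a minimal sketch of the standard (Market-1501-style) per-query average precision over a binary match vector, without the single-gallery-shot resampling:

    import numpy as np

    def average_precision(raw_cmc):
        # raw_cmc: binary vector of gallery matches, ordered by ascending distance
        num_rel = raw_cmc.sum()   # number of relevant gallery images
        hits = raw_cmc.cumsum()   # matches accumulated at each rank
        precision_at_k = hits / (np.arange(len(raw_cmc)) + 1.0)
        return (precision_at_k * raw_cmc).sum() / num_rel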

    opened by kalilili 14
  • Cython problem


    I installed TorchReid on another computer (Windows 10, Anaconda3) today using the new conda-based installation method (option 2).

    After that, the message "rank.py:17: UserWarning: Cython evaluation (very fast) is unavailable, now use python evaluation" keeps appearing. I removed Cython and reinstalled the latest version, but the warning persists.

    My Cython version is 0.29.7. Is it not compatible with the latest version of Cython?

    opened by raretomato 13
  • how to cite your result


    @KaiyangZhou I want to write a paper and cite some of your conclusions.

    KaiyangZhou.Pytorch implementation of deep person re-identification models[EB/OL].[2018-12].https://github.com/KaiyangZhou/deep-person-reid

    Is this form correct?

    opened by ciwei123 13
  • MSMT17 dataset version updated?


    @KaiyangZhou Thanks for your implementation, which advances further research in re-ID. I noticed that the MSMT17 dataset has been upgraded to version 2 on the official website. Would you consider updating the relevant data loader in your code?

    new_feature 
    opened by d-li14 13
  • How to extract features?


    Hi,

    first thank you for your work.

    Could you please tell me how to extract features for each image in:

    • training
    • testing
    • query

    It would be great to have them in a .pickle file in an OrderedDict format. Example for a resnet50: dict({img01 : [array of 2048]}, {img02 : [array of 2048]}, ...)

    Thank you.

    EDIT: In engine.py we have access to features, but I am not sure how to use them.

    opened by djidje 12
  • Using OSNet for Pedestrian Attribute Recognition


    Hi, I was trying to replicate OSNet for pedestrian attribute recognition on the PA-100K dataset as mentioned in the paper. I added a classifier on top of the base OSNet (2 FC layers, input dim=512, output dim=number of classes in PA-100K) and trained from scratch with the values mentioned in the paper, but I am only getting an F1 score of around 50. Is there any way I can improve this? Also, it would be really helpful if the code used for training on PA-100K were released.

    opened by akashmanna 11
  • Triplet loss resulting in 0.2% mAP


    I've added the dataset I'm working with (PRAI-1581) and want to use triplet loss for training, but the results make no sense. I've already used softmax loss with OSNet and achieved 47.7% mAP, and ResNet50 with softmax achieved 36.4% mAP. Therefore I can presume that the problem is not a faulty implementation of my dataset.

    But when I run the following config with triplet loss, I only get 0.2% mAP, and I hope you can help me find out how I am using torchreid wrong. I attach an image showing that training actually makes the model worse over time (screenshot attached).

    Also, after about the 20th epoch the loss value just stays around 0.3, which reminds me of the cfg.loss.triplet.margin = 0.3 value. Does that make sense?

    Configuration:
    adam:
      beta1: 0.9
      beta2: 0.999
    cuhk03:
      classic_split: False
      labeled_images: False
      use_metric_cuhk03: False
    data:
      combineall: False
      height: 256
      load_train_targets: False
      norm_mean: [0.485, 0.456, 0.406]
      norm_std: [0.229, 0.224, 0.225]
      root: /net/merkur/storage/deeplearning/datasets/reid
      save_dir: /net/merkur/storage/deeplearning/users/morlen/log/resnet50_prai1581_triplet_cosinelr6
      sources: ['prai1581']
      split_id: 0
      targets: ['prai1581']
      transforms: ['random_flip', 'random_erase', 'random_crop']
      type: image
      width: 128
      workers: 4
    loss:
      name: triplet
      softmax:
        label_smooth: True
      triplet:
        margin: 0.3
        weight_t: 1.0
        weight_x: 0.0
    market1501:
      use_500k_distractors: False
    model:
      load_weights: 
      name: resnet50
      pretrained: True
      resume: 
    rmsprop:
      alpha: 0.99
    sampler:
      num_instances: 4
      train_sampler: RandomIdentitySampler
      train_sampler_t: RandomIdentitySampler
    sgd:
      dampening: 0.0
      momentum: 0.9
      nesterov: False
    test:
      batch_size: 450
      dist_metric: euclidean
      eval_freq: 20
      evaluate: False
      normalize_feature: True
      ranks: [1, 5, 10, 20]
      rerank: False
      start_eval: 0
      visrank: False
      visrank_topk: 10
    train:
      base_lr_mult: 0.1
      batch_size: 104
      fixbase_epoch: 10
      gamma: 0.1
      lr: 0.002
      lr_scheduler: cosine
      max_epoch: 125
      new_layers: ['classifier']
      open_layers: ['classifier']
      optim: amsgrad
      print_freq: 20
      seed: 1
      staged_lr: False
      start_epoch: 0
      stepsize: [20]
      weight_decay: 0.0005
    use_gpu: True
    video:
      pooling_method: avg
      sample_method: evenly
      seq_len: 15
    

    I will add some output here too:

    => Start training
    * Only train ['classifier'] (epoch: 1/10)
    epoch: [1/125][20/176]	time 0.148 (0.313)	data 0.000 (0.092)	eta 1:54:45	loss_t 5.6625 (6.1717)	loss_x 6.6618 (6.6707)	acc 0.0000 (0.0962)	lr 0.002000
    epoch: [1/125][40/176]	time 0.136 (0.226)	data 0.000 (0.046)	eta 1:22:35	loss_t 6.0268 (6.1738)	loss_x 6.6604 (6.6653)	acc 0.0000 (0.0721)	lr 0.002000
    epoch: [1/125][60/176]	time 0.137 (0.197)	data 0.000 (0.031)	eta 1:12:03	loss_t 6.0984 (6.1199)	loss_x 6.6612 (6.6637)	acc 0.0000 (0.0641)	lr 0.002000
    epoch: [1/125][80/176]	time 0.136 (0.183)	data 0.000 (0.023)	eta 1:06:50	loss_t 5.5100 (6.1143)	loss_x 6.6605 (6.6629)	acc 3.8462 (0.1082)	lr 0.002000
    epoch: [1/125][100/176]	time 0.140 (0.174)	data 0.000 (0.019)	eta 1:03:41	loss_t 6.0706 (6.0967)	loss_x 6.6606 (6.6624)	acc 0.0000 (0.1250)	lr 0.002000
    epoch: [1/125][120/176]	time 0.139 (0.170)	data 0.000 (0.015)	eta 1:01:54	loss_t 6.3942 (6.1170)	loss_x 6.6606 (6.6621)	acc 0.0000 (0.1522)	lr 0.002000
    epoch: [1/125][140/176]	time 0.166 (0.166)	data 0.000 (0.013)	eta 1:00:31	loss_t 6.7135 (6.0950)	loss_x 6.6605 (6.6619)	acc 0.0000 (0.1717)	lr 0.002000
    epoch: [1/125][160/176]	time 0.153 (0.164)	data 0.000 (0.012)	eta 0:59:35	loss_t 5.6027 (6.1078)	loss_x 6.6606 (6.6617)	acc 0.0000 (0.1562)	lr 0.002000
    ...
    epoch: [16/125][20/176]	time 0.448 (0.489)	data 0.000 (0.027)	eta 2:37:42	loss_t 0.3140 (0.3217)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.001930
    epoch: [16/125][40/176]	time 0.437 (0.466)	data 0.000 (0.013)	eta 2:30:01	loss_t 0.3143 (0.3204)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.001930
    epoch: [16/125][60/176]	time 0.465 (0.463)	data 0.000 (0.009)	eta 2:28:52	loss_t 0.3013 (0.3152)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.001930
    epoch: [16/125][80/176]	time 0.449 (0.462)	data 0.000 (0.007)	eta 2:28:23	loss_t 0.3251 (0.3131)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0120)	lr 0.001930
    epoch: [16/125][100/176]	time 0.451 (0.459)	data 0.000 (0.005)	eta 2:27:25	loss_t 0.3010 (0.3118)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0865)	lr 0.001930
    epoch: [16/125][120/176]	time 0.451 (0.457)	data 0.000 (0.005)	eta 2:26:37	loss_t 0.3256 (0.3108)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0721)	lr 0.001930
    epoch: [16/125][140/176]	time 0.437 (0.454)	data 0.000 (0.004)	eta 2:25:32	loss_t 0.3028 (0.3102)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0687)	lr 0.001930
    epoch: [16/125][160/176]	time 0.441 (0.452)	data 0.000 (0.003)	eta 2:24:43	loss_t 0.3011 (0.3095)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0962)	lr 0.001930
    ...
    epoch: [60/125][20/176]	time 0.447 (0.484)	data 0.000 (0.028)	eta 1:33:28	loss_t 0.3005 (0.3013)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.001088
    epoch: [60/125][40/176]	time 0.441 (0.466)	data 0.000 (0.014)	eta 1:29:58	loss_t 0.3003 (0.3009)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0962)	lr 0.001088
    epoch: [60/125][60/176]	time 0.442 (0.460)	data 0.000 (0.009)	eta 1:28:39	loss_t 0.3003 (0.3008)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.1923)	lr 0.001088
    epoch: [60/125][80/176]	time 0.450 (0.458)	data 0.000 (0.007)	eta 1:28:03	loss_t 0.3007 (0.3007)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.2404)	lr 0.001088
    epoch: [60/125][100/176]	time 0.450 (0.457)	data 0.000 (0.006)	eta 1:27:41	loss_t 0.3001 (0.3006)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.2981)	lr 0.001088
    epoch: [60/125][120/176]	time 0.451 (0.456)	data 0.000 (0.005)	eta 1:27:21	loss_t 0.3000 (0.3006)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.2804)	lr 0.001088
    epoch: [60/125][140/176]	time 0.453 (0.455)	data 0.000 (0.004)	eta 1:27:03	loss_t 0.3004 (0.3005)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.2679)	lr 0.001088
    epoch: [60/125][160/176]	time 0.451 (0.455)	data 0.000 (0.004)	eta 1:26:49	loss_t 0.3005 (0.3005)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.2344)	lr 0.001088
    ...
    epoch: [125/125][20/176]	time 0.439 (0.479)	data 0.000 (0.029)	eta 0:01:14	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.000000
    epoch: [125/125][40/176]	time 0.450 (0.468)	data 0.000 (0.014)	eta 0:01:03	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0000)	lr 0.000000
    epoch: [125/125][60/176]	time 0.453 (0.460)	data 0.000 (0.010)	eta 0:00:53	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0641)	lr 0.000000
    epoch: [125/125][80/176]	time 0.449 (0.457)	data 0.000 (0.007)	eta 0:00:43	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.0481)	lr 0.000000
    epoch: [125/125][100/176]	time 0.454 (0.456)	data 0.000 (0.006)	eta 0:00:34	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.1154)	lr 0.000000
    epoch: [125/125][120/176]	time 0.436 (0.453)	data 0.000 (0.005)	eta 0:00:25	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.1923)	lr 0.000000
    epoch: [125/125][140/176]	time 0.453 (0.451)	data 0.000 (0.004)	eta 0:00:16	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.1923)	lr 0.000000
    epoch: [125/125][160/176]	time 0.450 (0.452)	data 0.000 (0.004)	eta 0:00:07	loss_t 0.3000 (0.3000)	loss_x 6.6606 (6.6606)	acc 0.0000 (0.1923)	lr 0.000000
    => Final test
    ##### Evaluating prai1581 (source) #####
    Extracting features from query set ...
    Done, obtained 4680-by-2048 matrix
    Extracting features from gallery set ...
    Done, obtained 15258-by-2048 matrix
    Speed: 0.0128 sec/batch
    Normalzing features with L2 norm ...
    Computing distance matrix with metric=euclidean ...
    Computing CMC and mAP ...
    ** Results **
    mAP: 0.2%
    CMC curve
    Rank-1  : 0.1%
    Rank-5  : 0.6%
    Rank-10 : 0.9%
    Rank-20 : 1.7%
    Checkpoint saved to "/net/merkur/storage/deeplearning/users/morlen/log/resnet50_prai1581_triplet_cosinelr6/model/model.pth.tar-125"
    Elapsed 2:39:56
    

    Please help me understand how to use triplet loss with torchreid.

    opened by lennartmoritz 10
  • Memory size required for Market1501-500k


    Hi,

    I have trained models on Market1501-500k, but the evaluation process used up all free memory and made the machine freeze due to an out-of-memory problem.

    I have 48 GB of RAM. Is that enough?

    Or did I do something wrong? These are the parameters I use to train the model.

    python scripts/main.py \
    --root $DATA \
    --app image \
    --loss softmax \
    --label-smooth \
    -s market1501 \
    --market1501-500k \
    -a resnet50_fc512 \
    --optim adam \
    --lr 0.0003 \
    --max-epoch 60 \
    --stepsize 20 40 \
    --batch-size 32 \
    --save-dir log/model-market1501-softmax \
    --gpu-devices 0
    
    opened by crossknight 10
  • error occurs while training


    Epoch: [10/60][400/404] Time 0.173 (0.188) Data 0.000 (0.011) Loss 1.6979 (1.5605) Acc 81.25 (91.09) Lr 0.000300 eta 1:03:19

    Evaluating market1501 (source)

    Extracting features from query set ...
    Done, obtained 3368-by-2048 matrix
    Extracting features from gallery set ...
    Done, obtained 15913-by-2048 matrix
    Speed: 0.0211 sec/batch
    Computing distance matrix with metric=euclidean ...
    Computing CMC and mAP ...

    "ValueError Traceback (most recent call last) in 4 eval_freq=10, 5 print_freq=10, ----> 6 test_only=False 7 )

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\engine\engine.py in run(self, save_dir, max_epoch, start_epoch, print_freq, fixbase_epoch, open_layers, start_eval, eval_freq, test_only, dist_metric, normalize_feature, visrank, visrank_topk, use_metric_cuhk03, ranks, rerank) 141 save_dir=save_dir, 142 use_metric_cuhk03=use_metric_cuhk03, --> 143 ranks=ranks 144 ) 145 self._save_checkpoint(epoch, rank1, save_dir)

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\engine\engine.py in test(self, epoch, dist_metric, normalize_feature, visrank, visrank_topk, save_dir, use_metric_cuhk03, ranks, rerank) 225 use_metric_cuhk03=use_metric_cuhk03, 226 ranks=ranks, --> 227 rerank=rerank 228 ) 229

    ~\anaconda3\envs\torchreid\lib\site-packages\torch\autograd\grad_mode.py in decorate_no_grad(*args, **kwargs) 47 def decorate_no_grad(*args, **kwargs): 48 with self: ---> 49 return func(*args, **kwargs) 50 return decorate_no_grad 51

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\engine\engine.py in _evaluate(self, epoch, dataset_name, query_loader, gallery_loader, dist_metric, normalize_feature, visrank, visrank_topk, save_dir, use_metric_cuhk03, ranks, rerank) 300 q_camids, 301 g_camids, --> 302 use_metric_cuhk03=use_metric_cuhk03 303 ) 304

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\metrics\rank.py in evaluate_rank(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_metric_cuhk03, use_cython) 199 return evaluate_cy( 200 distmat, q_pids, g_pids, q_camids, g_camids, max_rank, --> 201 use_metric_cuhk03 202 ) 203 else:

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\metrics\rank_cylib\rank_cy.pyx in torchreid.metrics.rank_cylib.rank_cy.evaluate_cy() 22 23 # Main interface ---> 24 cpdef evaluate_cy(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_metric_cuhk03=False): 25 distmat = np.asarray(distmat, dtype=np.float32) 26 q_pids = np.asarray(q_pids, dtype=np.int64)

    ~\Desktop\deep-person-reid-master\deep-person-reid-master\torchreid\metrics\rank_cylib\rank_cy.pyx in torchreid.metrics.rank_cylib.rank_cy.evaluate_cy() 30 if use_metric_cuhk03: 31 return eval_cuhk03_cy(distmat, q_pids, g_pids, q_camids, g_camids, max_rank) ---> 32 return eval_market1501_cy(distmat, q_pids, g_pids, q_camids, g_camids, max_rank) 33 34

    ValueError: Buffer dtype mismatch, expected 'long' but got 'long long'"

    opened by berkdenizi 8
  • why always show 'cuda out of memory'


    It has nothing to do with batch size.

    => Start training

    * Only train ['classifier'] (epoch: 1/10)
    Traceback (most recent call last):
      File "scripts/main.py", line 164, in <module>
        main()
      File "scripts/main.py", line 160, in main
        engine.run(**engine_run_kwargs(cfg))
      File "/home/guanyonglai/gyl/goods_project/osnet-deep-person-reid/torchreid/engine/engine.py", line 122, in run
        open_layers=open_layers
      File "/home/guanyonglai/gyl/goods_project/osnet-deep-person-reid/torchreid/engine/image/softmax.py", line 95, in train
        outputs = self.model(imgs)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 148, in forward
        inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 159, in scatter
        return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 35, in scatter_kwargs
        inputs = scatter(inputs, target_gpus, dim) if inputs else []
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 28, in scatter
        return scatter_map(inputs)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 15, in scatter_map
        return list(zip(*map(scatter_map, obj)))
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 13, in scatter_map
        return Scatter.apply(target_gpus, None, dim, obj)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 89, in forward
        outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
      File "/home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/cuda/comm.py", line 147, in scatter
        return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
    RuntimeError: CUDA error: out of memory (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:241)
    frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f01581aa441 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libc10.so)
    frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f01581a9d7a in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libc10.so)
    frame #2: + 0x1581d (0x7f015775881d in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
    frame #3: + 0x16247 (0x7f0157759247 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
    frame #4: at::native::empty_cuda(c10::ArrayRef, c10::TensorOptions const&) + 0x121 (0x7f0087143f81 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #5: at::CUDAType::empty(c10::ArrayRef, c10::TensorOptions const&) const + 0x19b (0x7f0085d8b6fb in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #6: torch::autograd::VariableType::empty(c10::ArrayRef, c10::TensorOptions const&) const + 0x284 (0x7f0152824094 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
    frame #7: at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) + 0x506 (0x7f0129a98666 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
    frame #8: at::TypeDefault::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const + 0x17 (0x7f0129d17857 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
    frame #9: torch::autograd::VariableType::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const + 0x2c2 (0x7f015270db52 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
    frame #10: torch::cuda::scatter(at::Tensor const&, c10::ArrayRef, c10::optional<std::vector<long, std::allocator > > const&, long, c10::optional<std::vector<c10::optionalc10::cuda::CUDAStream, std::allocator<c10::optionalc10::cuda::CUDAStream > > > const&) + 0x389 (0x7f0152c61fd9 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
    frame #11: + 0x5a41cf (0x7f0158b9a1cf in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
    frame #12: + 0x130fac (0x7f0158726fac in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
    frame #20: THPFunction_apply(_object*, _object*) + 0x6b1 (0x7f01589aa301 in /home/guanyonglai/anaconda3/envs/torchreid/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
    opened by guanyonglai 8
  • Filter out background in bboxes by segmentation before classification


    Hi @KaiyangZhou. I was wondering if you ever did any experiments regarding the possibility of filtering out the background in the people bounding boxes by using instance segmentation models and how that changed the ReID performance. Some of the bounding boxes could contain people in the background or parts of them which could confuse the model, leading to less accurate identification results. I have some results here for tracking and some example images of what I input to your ReID models: https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/wiki/Masked-detection-crops-vs-regular-detection-crops-for-ReID-feature-extraction. IDF1 (the ratio of correctly identified detections over the average number of ground-truth and computed detections) increases quite a lot given that the models have not been trained this way. Do you think that retraining the models with greyed-out background would increase the classification performance dramatically?

    opened by mikel-brostrom 0
  • Problem with triplet loss


    I made two changes to the original code: first, I changed the network from "resnet" to "osnet" (osnet_x1_0); second, I changed the loss function from "softmax" to "triplet". Now I get this error; can anyone help me? @KaiyangZhou (screenshot attached)

    opened by shayan-aqabarary 0
  • ModuleNotFoundError: No module named torchreid.utils


    (torchreid) socialab@...:~/Downloads$ cd deep-person-reid/
    (torchreid) socialab@...:~/Downloads/deep-person-reid$ python scripts/main.py \
        --config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \
        --transforms random_flip random_erase \
        --root $PATH_TO_DATA
    /home/socialab/anaconda3/envs/torchreid/lib/python3.7/site-packages/torchreid/reid/metrics/rank.py:12: UserWarning: Cython evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
      'Cython evaluation (very fast so highly recommended) is '
    Traceback (most recent call last):
      File "scripts/main.py", line 9, in <module>
        from torchreid.utils import (
    ModuleNotFoundError: No module named 'torchreid.utils'

    opened by kailaspanu 0
Releases (latest: v1.0.6)
  • v1.0.6(Oct 23, 2019)

  • v1.0.5(Oct 23, 2019)

  • v1.0.0(Aug 26, 2019)

    Major updates:

    • Significant changes to "scripts/main.py", where most arguments in argparse are replaced with config.
    • Add config files which contain predefined training params to configs/.
    • In data manager and engine, use_cpu is changed to use_gpu.
    • In data manager, batch_size is changed to batch_size_train and batch_size_test.
    • Tensorboard is available.
  • v0.9.1(Aug 4, 2019)

    Main update:

    • Plot ranks in a single figure (currently supports image-reid only).
    • Visualize activation maps to understand where CNNs focus to extract features for reid.
  • v0.8.1(Jul 8, 2019)

  • v0.8.0(Jul 3, 2019)

  • v0.7.8(May 28, 2019)

  • v0.7.7(May 24, 2019)

  • v0.7.5(May 9, 2019)

    Major updates:

    • Added person reranking, which can be activated with rerank=True in engine.run(). It only works in the evaluation mode, i.e. test_only=True.
    • Added normalize_feature to engine.run() to allow the feature vectors to be L2-normalized before computing feature distances; a usage sketch follows below.
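    A minimal sketch of how these two options are passed, assuming an engine built as in the quickstart:

    engine.run(
        test_only=True,           # re-ranking only works in evaluation mode
        dist_metric='euclidean',
        normalize_feature=True,   # L2-normalize features before computing distances
        rerank=True
    )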
  • v0.7.4(Apr 27, 2019)

    Major changes:

    • Added conda install instructions.
    • Fixed a bug in learning rate scheduler https://github.com/KaiyangZhou/deep-person-reid/commit/dcd8da565a9802bf48e8694e616e633a51b413a3
    • Fixed a bug in combineall https://github.com/KaiyangZhou/deep-person-reid/commit/bb6dc46e21e335789b9d891f7191c1da3a5d2e01
  • v0.7.3(Apr 18, 2019)

    Major changes

    • https://github.com/KaiyangZhou/deep-person-reid/commit/1e9d466f42256cc451f6f73761c298cabbcd0b39
    • https://github.com/KaiyangZhou/deep-person-reid/commit/4a033659b0330bcbd25dc1cc344cf26ddd69ac73
  • v0.7.2(Mar 25, 2019)

  • v0.7.1(Mar 25, 2019)

    Bug fix:

    • https://github.com/KaiyangZhou/deep-person-reid/commit/235348f67248a9d27d5c9bcafcabcfe5cbf61cb3: return ImageDataset or VideoDataset rather than Dataset.
  • v0.7.0(Mar 25, 2019)

  • v0.5.0(Nov 12, 2018)

    Major updates:

    • Model codes such as resnet.py and densenet.py keep the original style for easier modification.
    • Generalized CrossEntropyLabelSmooth to CrossEntropyLoss. --label-smooth must now be passed explicitly in order to add the label-smoothing regularizer to the cross-entropy loss.
    • Added support for multi-dataset training. Datasets are specified by the arguments -s and -t, which refer to source and target datasets, respectively. Both can take multiple strings delimited by spaces. For example, to train a model on Market1501+DukeMTMC-reID, set -s market1501 dukemtmcreid. To test on multiple datasets, use -t market1501 dukemtmcreid cuhk03 msmt17.
    • Arguments are unified in args.py.
    • Dataloaders are wrapped into two classes, ImageDataManager and VideoDataManager (see data_manager.py). A data manager is initialized by dm = ImageDataManager(use_gpu, **image_dataset_kwargs(args)), where image_dataset_kwargs() is implemented in args.py. Therefore, when new arguments are added to the data manager, you don't need to exhaustively change the code everywhere; you only need to (1) add the new arguments in args.py and (2) update the input arguments in data_manager.py.
    • BENCHMARK is replaced with MODEL_ZOO where model weights and training scripts can be downloaded.
  • v0.3.0(Aug 15, 2018)

    • Added --lambda-xent and --lambda-htri in xxx_xent_htri.py, which balance the cross-entropy loss and the hard-mining triplet loss.
    • Divided losses into separate files for easier extension.
    • Moved the main code into the folder torchreid/ (this structure will be maintained).
    • Automated the download of dukemtmcreid and dukemtmcvidreid.
  • v0.2.2(Aug 1, 2018)

    • Added --load-weights (weights that don't match in size will be discarded, e.g. old classification layer).
    • Updated dukemtmcvidreid naming; both old and new formats are supported.
    • Added --vis-ranked-res and reidtools.py, allowing ranked images to be visualized.

    Note: --use-lmdb is postponed.

  • v0.2.0(Jul 6, 2018)

    To be done:

    • --lmdb is under development.
  • v0.1.0(Jun 4, 2018)

  • v0.0.9(Jun 4, 2018)

    • multi-GPU training.
    • both image-based and video-based reid.
    • unified interface for different reid models.
    • easy dataset preparation.
    • end-to-end training and evaluation.
    • standard dataset splits used by most papers.
    • download of trained models.