TVT
Code for the paper "TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation".
Datasets:
- Digit: MNIST, SVHN, USPS
- Object: Office, Office-Home, VisDA-2017
Training:
The ViT implementation is largely borrowed from ViT-pytorch.
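TVT, like other adversarial unsupervised domain adaptation methods, trains a domain discriminator through a gradient reversal layer so that the backbone learns domain-invariant features. A minimal PyTorch sketch of such a layer (the class and function names here are illustrative, not the repo's actual API):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor.
        return grad_output.neg() * ctx.alpha, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)

# Toy check: the forward pass is the identity, while the backward
# pass flips the gradient sign.
x = torch.ones(3, requires_grad=True)
y = grad_reverse(x, alpha=1.0)
y.sum().backward()
print(x.grad)  # gradients are -1 instead of +1
```

In a full training loop, features from the transformer backbone would pass through `grad_reverse` before the domain discriminator, so that minimizing the discriminator's loss simultaneously pushes the backbone toward features the discriminator cannot separate.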
Citation:
@article{yang2021tvt,
  title={TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation},
  author={Yang, Jinyu and Liu, Jingjing and Xu, Ning and Huang, Junzhou},
  journal={arXiv preprint arXiv:2108.05988},
  year={2021}
}