Code for the ICML 2021 paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"

Overview

[The main figure]

Install

pip install -r requirements.txt
pip install -e .
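
The editable install makes the vilt package importable from any directory. A quick sanity check (the import path is taken from this repo's scripts):

python -c "from vilt.modules import ViLTransformerSS"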

Download Pretrained Weights

We provide five pretrained checkpoints:

  1. ViLT-B/32 Pretrained with MLM+ITM for 200k steps on GCC+SBU+COCO+VG (ViLT-B/32 200k) link
  2. ViLT-B/32 200k finetuned on VQAv2 link
  3. ViLT-B/32 200k finetuned on NLVR2 link
  4. ViLT-B/32 200k finetuned on COCO IR/TR link
  5. ViLT-B/32 200k finetuned on F30K IR/TR link

Out-of-the-box MLM + Visualization Demo

pip install gradio==1.6.4
python demo.py with num_gpus=<0 if you have no gpus else 1> load_path="<YOUR_WEIGHT_ROOT>/vilt_200k_mlm_itm.ckpt"

Example:
python demo.py with num_gpus=0 load_path="weights/vilt_200k_mlm_itm.ckpt"

Out-of-the-box VQA Demo

pip install gradio==1.6.4
python demo_vqa.py with num_gpus=<0 if you have no gpus else 1> load_path="<YOUR_WEIGHT_ROOT>/vilt_vqa.ckpt" test_only=True

Example:
python demo_vqa.py with num_gpus=0 load_path="weights/vilt_vqa.ckpt" test_only=True

Dataset Preparation

See DATA.md

Train New Models

See TRAIN.md

Evaluation

See EVAL.md

Citation

If you use any part of this code or the pretrained weights for your own purposes, please cite our paper:

@article{kim2021vilt,
  title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
  author={Kim, Wonjae and Son, Bokyung and Kim, Ildoo},
  journal={arXiv preprint arXiv:2102.03334},
  year={2021}
}

Contact for Issues

Comments
  • Add ViLT to HuggingFace Transformers

    Hi,

    I've been reading the ViLT paper and was impressed by the simplicity, as it only adds text embeddings to a ViT.

    As ViT is already available in HuggingFace Transformers, adding ViLT should be relatively easy.

    I've currently implemented the model (see here for my current implementation). It includes a conversion script (convert_vilt_original_to_pytorch.py) to convert the weights from this repository (the PyTorch Lightning module) to its HuggingFace counterpart, for all models (base one + the ones with a head on top).

    However, I'm facing some issues when performing a forward pass with the original implementation in Google Colab (when just doing pip install -r requirements.txt and running the demo_vqa.py script, you get the following):

    Traceback (most recent call last):
      File "demo_vqa.py", line 17, in <module>
        from vilt.modules import ViLTransformerSS
      File "/content/ViLT/vilt/modules/__init__.py", line 1, in <module>
        from .vilt_module import ViLTransformerSS
      File "/content/ViLT/vilt/modules/vilt_module.py", line 3, in <module>
        import pytorch_lightning as pl
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 62, in <module>
        from pytorch_lightning import metrics
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 14, in <module>
        from pytorch_lightning.metrics.metric import Metric
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/metric.py", line 23, in <module>
        from pytorch_lightning.metrics.utils import _flatten, dim_zero_cat, dim_zero_mean, dim_zero_sum
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 18, in <module>
        from pytorch_lightning.utilities import rank_zero_warn
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 24, in <module>
        from pytorch_lightning.utilities.apply_func import move_data_to_device
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 25, in <module>
        from torchtext.data import Batch
    ImportError: cannot import name 'Batch' from 'torchtext.data' (/usr/local/lib/python3.7/dist-packages/torchtext/data/__init__.py)
    

    Upgrading PyTorch Lightning to the latest version also raises an error:

    Traceback (most recent call last):
      File "demo_vqa.py", line 17, in <module>
        from vilt.modules import ViLTransformerSS
      File "/content/ViLT/vilt/modules/__init__.py", line 1, in <module>
        from .vilt_module import ViLTransformerSS
      File "/content/ViLT/vilt/modules/vilt_module.py", line 7, in <module>
        from vilt.modules import heads, objectives, vilt_utils
      File "/content/ViLT/vilt/modules/vilt_utils.py", line 11, in <module>
        from vilt.gadgets.my_metrics import Accuracy, VQAScore, Scalar
      File "/content/ViLT/vilt/gadgets/my_metrics.py", line 2, in <module>
        from pytorch_lightning.metrics import Metric
    ModuleNotFoundError: No module named 'pytorch_lightning.metrics'
    

    This happens because PL deprecated its metrics module; the metrics API moved to the separate torchmetrics package.
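
    A possible workaround (a sketch only; the fallback import and version bounds are assumptions, not from this repo): patch the import in vilt/gadgets/my_metrics.py so it resolves on both old and new Lightning versions:

    # hypothetical patch for vilt/gadgets/my_metrics.py
    try:
        from pytorch_lightning.metrics import Metric  # older Lightning releases
    except ImportError:
        from torchmetrics import Metric  # newer Lightning: metrics moved here

    For the original torchtext error, installing a torchtext release that still exposes torchtext.data.Batch (pre-0.9, matched to the pinned torch version) should let the unmodified requirements import cleanly.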

    Are you able to provide a simple Colab notebook to perform inference on an image+text pair?

    Thanks!

    opened by NielsRogge 13
  • python run.py with data_root=/data/workspace/dataset num_gpus=4 num_nodes=1 task_finetune_irtr_f30k_randaug per_gpu_batchsize=4 load_path="weights/vilt_200k_mlm_itm.ckpt"

    Saving latest checkpoint...
    INFO - lightning - Saving latest checkpoint...
    ERROR - ViLT - Failed after 1:05:38!
    Traceback (most recent calls WITHOUT Sacred internals):
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 524, in train
        self.train_loop.run_training_epoch()
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 572, in run_training_epoch
        batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 704, in run_training_batch
        self.trainer.hiddens)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 818, in training_step_and_backward
        result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 339, in training_step
        training_step_output = self.trainer.accelerator_backend.training_step(args)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 158, in training_step
        return self._step(args)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 170, in _step
        output = self.trainer.model(*args)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 179, in forward
        output = self.module.training_step(*inputs[0], **kwargs[0])
      File "/data/workspace/ViLT/vilt/modules/vilt_module.py", line 219, in training_step
        vilt_utils.set_task(self)
      File "/data/workspace/ViLT/vilt/modules/vilt_utils.py", line 177, in set_task
        picked = all_gather(current_tasks)
      File "/data/workspace/ViLT/vilt/modules/dist_utils.py", line 165, in all_gather
        size_list, tensor = _pad_to_largest_tensor(tensor, group)
      File "/data/workspace/ViLT/vilt/modules/dist_utils.py", line 129, in _pad_to_largest_tensor
        dist.all_gather(size_list, local_size, group=group)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1870, in all_gather
        work.wait()
    RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/unbound_buffer.cc:84] Timed out waiting 1800000ms for recv operation to complete

    During handling of the above exception, another exception occurred:

    Traceback (most recent calls WITHOUT Sacred internals):
      File "/data/workspace/ViLT/run.py", line 72, in main
        trainer.fit(model, datamodule=dm)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
        results = self.accelerator_backend.train()
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train
        results = self.ddp_train(process_idx=self.task_idx, model=model)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 305, in ddp_train
        results = self.train_or_test()
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test
        results = self.trainer.train()
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 555, in train
        self.train_loop.on_train_end()
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 200, in on_train_end
        self.check_checkpoint_callback(should_save=True, is_last=True)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 234, in check_checkpoint_callback
        callback.on_validation_end(self.trainer, model)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 203, in on_validation_end
        self.save_checkpoint(trainer, pl_module)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 238, in save_checkpoint
        self._validate_monitor_key(trainer)
      File "/root/anaconda3/envs/vilt/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 516, in _validate_monitor_key
        raise MisconfigurationException(m)
    pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='val/the_metric') not found in the returned metrics: ['irtr/train/irtr_loss', 'itm/train/loss', 'itm/train/wpa_loss', 'itm/train/accuracy']. HINT: Did you call self.log('val/the_metric', tensor) in the LightningModule?

    Epoch 0: 0%| | 24/9691 [30:14<202:59:08, 75.59s/it, loss=0.579, v_num=0]
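
    A hedged note from the log (the key name comes from the error message; the snippet is only an illustration, not the repo's implementation): the real failure is the gloo all_gather timeout above, and the MisconfigurationException is raised afterwards while saving a crash-time checkpoint, because ModelCheckpoint monitors "val/the_metric" and no validation step had logged that key yet. A LightningModule satisfies the monitor like so:

    import torch
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def validation_epoch_end(self, outputs):
            # ViLT aggregates task metrics in vilt_utils.epoch_wrapup();
            # the checkpoint callback only needs this exact key to be logged
            the_metric = torch.tensor(0.0)  # placeholder aggregate
            self.log("val/the_metric", the_metric)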

    opened by raojay7 9
  • got error when I turn on mpp (Masked Patch Prediction)

    Hi, @dandelin

    When I turn on the mpp (Masked Patch Prediction), I get this error:

    AttributeError: 'VisionTransformer' object has no attribute 'mask_token'

    The above error appears in vision_transformer.py. Could you please tell me how to address it?

    Thank you for your help.

    Best regards, Ge-Peng.
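
    A hedged guess at a workaround (the attribute name comes from the error itself; the size and init scheme are assumptions): masked patch prediction needs a learnable mask embedding on the ViT, which can be registered manually if the instantiated VisionTransformer lacks it:

    import torch
    import torch.nn as nn

    # `vit` is the VisionTransformer instance raising the error; 768 = ViT-B width
    vit.mask_token = nn.Parameter(torch.zeros(1, 1, 768))
    nn.init.trunc_normal_(vit.mask_token, std=0.02)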

    opened by GewelsJI 8
  • Unable to reproduce the 100k results

    Dear authors, thanks for open-sourcing the code. I tried pretraining for 100k steps and fine-tuning on VQAv2, but my test-dev score is about 65, not the 70.8 reported in the paper.

    Here are my pretraining and fine-tuning commands:

    python run.py with data_root=vilt_dataset/ \
    	num_gpus=8 num_nodes=8 task_mlm_itm whole_word_masking=True step100k \
    	per_gpu_batchsize=64 exp_name=pretrain 
    
    python run.py with data_root=vilt_dataset/ \
    	num_gpus=8 num_nodes=1 task_finetune_vqa_randaug \
    	per_gpu_batchsize=32 load_path="result/pretrain_seed0_from_/version_0/checkpoints/last.ckpt" \
    	exp_name=vqa_finetune
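
    (For reference, the effective pretraining batch size in the first command is num_nodes × num_gpus × per_gpu_batchsize = 8 × 8 × 64 = 4096.)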
    

    Generate JSON with

    python run.py with data_root=vilt_dataset/ \
    	num_gpus=4 num_nodes=1 task_finetune_vqa \
    	per_gpu_batchsize=256 load_path="result/vqa_finetune_seed0_from_last/version_0/checkpoints/last.ckpt" \
    	test_only=True  exp_name="test_vqa"
    

    Here are my pretraining and fine-tuning TensorBoard logs: [three screenshots from 2021-06-10 omitted]

    opened by JACKHAHA363 8
  • Possible out-of-memory issue of dataloader

    Hello,

    I have read through your code, but haven't run it yet. One question about the dataloader implementation. According to

    https://github.com/dandelin/ViLT/blob/master/vilt/datasets/base_dataset.py#L43

    you load all the arrow files into memory. The pre-training data is hundreds of gigabytes. Could this cause an out-of-memory issue, or does the implementation assume a machine with large memory?

    Thanks,
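
    For reference, a sketch of the reader pattern used there (the path is hypothetical): pa.ipc.RecordBatchFileReader over a pa.memory_map source memory-maps the file instead of copying it, so the OS pages data in lazily and resident memory can stay well below the on-disk size:

    import pyarrow as pa

    # memory-map the Arrow IPC file; read_all() returns zero-copy views
    source = pa.memory_map("/data/arrow/coco_caption_karpathy_train.arrow", "r")
    table = pa.ipc.RecordBatchFileReader(source).read_all()
    print(table.num_rows)  # metadata access; columns materialize on use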

    opened by zhiqiangdon 4
  • AttributeError: module 'vilt' has no attribute 'modules'

    I ran into the following error when I ran the "Evaluate VQAv2" command:

      File "run.py", line 6, in <module>
        from vilt.modules import ViLTransformerSS
      File "ViLT/vilt/modules/__init__.py", line 1, in <module>
        from .vilt_module import ViLTransformerSS
      File "ViLT/vilt/modules/vilt_module.py", line 4, in <module>
        import vilt.modules.vision_transformer as vit
    AttributeError: module 'vilt' has no attribute 'modules'

    opened by leonodelee 4
  • I reproduced the code of pytorch version, but get different result

    In image retrieval, my R@1 is 68.4, which is higher than the 61.9 in the paper. In text retrieval, my R@1 is 73.5, which is lower than the 81.4 in the paper.

    So I wonder whether the input format in my code is wrong.

    For images, I use the "pixelbert_transform" function with size=384. For text, I use the BERT-base tokenizer with max length 40, which includes [CLS] and the word tokens but no [SEP]. For Flickr30k, I use dataset_flickr30k.json to get the test set, and I choose the first of the five captions for each image.

    Thanks very much for your help!

    opened by NostalgiaOfTime 4
  • Why is answers set to 0 for irtr even for the positive case?

    Hello, thanks for the amazing repository. If I understand correctly, for IRTR the answer should be 1 for the first element, which is the true text, and 0 for the remaining false texts (https://github.com/dandelin/ViLT/blob/master/vilt/modules/objectives.py#L429).

    But the code sets all of them, including the positive sample, to zero. Am I missing something here? Thanks!
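
    A hedged reading of that objective (shapes assumed for illustration): the score matrix for each image puts the one true text in column 0 followed by the sampled false texts, so the all-zero "answer" is a cross-entropy class index selecting that first column, not a binary relevance label:

    import torch
    import torch.nn.functional as F

    batch_size, num_texts = 8, 16                # 1 positive + 15 false texts
    score = torch.randn(batch_size, num_texts)   # similarity logits per image
    answer = torch.zeros(batch_size, dtype=torch.long)  # class 0 = the positive
    irtr_loss = F.cross_entropy(score, answer)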

    opened by TheShadow29 3
  • Can you share 'vqa_dict.json' file for vqa_demo?

    Hi, @dandelin. Thank you for your interesting work. I am running demo_vqa.py and it failed at line 38, https://dl.dropboxusercontent.com/s/otya4i5sagt4f5p/vqa_dict.json, because this link is unavailable now. Could you share this file? Also, the vilt_200k_mlm_itm.ckpt link is already unavailable... Looking forward to your reply!

    opened by Senwang98 3
  • Reproduce Flickr30k Evaluation results - DataSet problem

    Hello again @dandelin ,

    I was trying to reproduce the steps from https://github.com/dandelin/ViLT/blob/master/EVAL.md to get the results from Flickr30k T2IR.

    First I did what is suggested in https://github.com/dandelin/ViLT/blob/master/DATA.md.

    So I have the following structure in the folder /content/flickr30k:

        /content/flickr30k
         ├── flickr30k_images            
         │   ├── ....jpg
         |   ├── ....jpg
         ├── karpathy          
             ├── dataset_flickr30k.json              
    

    Then I do the transformation:

    from vilt.utils.write_f30k_karpathy import make_arrow
    make_arrow( '/content/flickr30k',  '/content/arrow')
    

    But when I run:

    python run.py with data_root='/content/arrow' num_gpus=1 num_nodes=1 per_gpu_batchsize=4 task_finetune_irtr_f30k_randaug test_only=True load_path="/content/TFM_Sparse_Embeddings/vilt_irtr_f30k.ckpt"
    

    I get the error:

    ERROR - ViLT - Failed after 0:00:06!
    Traceback (most recent calls WITHOUT Sacred internals):
      File "run.py", line 73, in main
        trainer.test(model, datamodule=dm)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 755, in test
        results = self.__test_given_model(model, test_dataloaders)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 820, in __test_given_model
        results = self.fit(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
        results = self.accelerator_backend.train()
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train
        results = self.ddp_train(process_idx=self.task_idx, model=model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 305, in ddp_train
        results = self.train_or_test()
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 67, in train_or_test
        results = self.trainer.run_test()
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 662, in run_test
        eval_loop_results, _ = self.run_evaluation(test_mode=True)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 566, in run_evaluation
        dataloaders, max_batches = self.evaluation_loop.get_evaluation_dataloaders(max_batches)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 56, in get_evaluation_dataloaders
        self.trainer.reset_test_dataloader(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/data_loading.py", line 299, in reset_test_dataloader
        self._reset_eval_dataloader(model, 'test')
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/data_loading.py", line 249, in _reset_eval_dataloader
        num_batches = len(dataloader) if has_len(dataloader) else float('inf')
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/data.py", line 33, in has_len
        raise ValueError('`Dataloader` returned 0 length.'
    ValueError: `Dataloader` returned 0 length. Please make sure that your Dataloader at least returns 1 batch
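    A hypothetical sanity check (the file name is assumed from the naming convention of the writer script): a zero-length test dataloader usually means the test-split arrow file is missing or empty under data_root, which can be verified directly:

    import pyarrow as pa

    # open the test-split arrow file and count its rows
    reader = pa.ipc.RecordBatchFileReader(
        pa.memory_map("/content/arrow/f30k_caption_karpathy_test.arrow", "r")
    )
    print(reader.read_all().num_rows)  # should be > 0 for the test split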
    
    opened by JoanFM 2
  • small difference between paper and code about token type embedding

    Thanks for your paper and code; they help me a lot. There is a small point that confuses me. In Section 3.1 of your paper, the text embedding consists of a word embedding, a position embedding, and a modal-type embedding. [screenshot of Section 3.1 omitted]

    while in the source code of vilt/modules/vilt_module.py, the text_embedding is implemented by:

    from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings
    ...
      self.text_embeddings = BertEmbeddings(bert_config)
    

    and an extra token-type embedding, self.token_type_embeddings = nn.Embedding(2, config["hidden_size"]). As far as I know, BertEmbeddings() already contains a token-type embedding operation inside, so there are actually two token-type embeddings for the text input and one for the image input. I know self.token_type_embeddings is used as the modal-type embedding to distinguish between image and text. Is this a mistake? Is it OK not to remove the token-type embedding inside BertEmbeddings(bert_config)? Will it cause any difference? Hoping for your reply, thanks!
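
    For what it's worth, a sketch of the double addition being described (config values assumed): BertEmbeddings already adds its internal token-type embedding (index 0 for single-segment text), and the separate 2-entry modality embedding is then added on top:

    import torch
    import torch.nn as nn
    from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings

    bert_config = BertConfig(hidden_size=768)
    text_embeddings = BertEmbeddings(bert_config)   # word + position + token_type
    token_type_embeddings = nn.Embedding(2, 768)    # modality: 0 = text, 1 = image

    text_ids = torch.randint(0, bert_config.vocab_size, (1, 40))
    text_embeds = text_embeddings(text_ids)         # inner token_type is all zeros
    text_embeds = text_embeds + token_type_embeddings(torch.zeros_like(text_ids))

    Since the inner token-type index is constantly 0 for text, it only contributes a fixed learned offset, which training can presumably absorb into the modality embedding.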

    opened by AAbathur 2
  • pyarrow.lib.ArrowInvalid: Not an Arrow file

    While running the following fine-tuning command for VQAv2:

    python run.py with data_root=/data2/dsets/dataset num_gpus=8 num_nodes=1 task_finetune_vqa_randaug per_gpu_batchsize=64 load_path="weights/vilt_200k_mlm_itm.ckpt"

    I'm encountering the following error:

    WARNING - root - Changed type of config entry "max_steps" from int to NoneType
    WARNING - ViLT - No observers have been added to this run
    INFO - ViLT - Running command 'main'
    INFO - ViLT - Started
    Global seed set to 0
    INFO - lightning - Global seed set to 0
    GPU available: True, used: True
    INFO - lightning - GPU available: True, used: True
    TPU available: None, using: 0 TPU cores
    INFO - lightning - TPU available: None, using: 0 TPU cores
    Using environment variable NODE_RANK for node rank (0).
    INFO - lightning - Using environment variable NODE_RANK for node rank (0).
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    INFO - lightning - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    Using native 16bit precision.
    INFO - lightning - Using native 16bit precision.
    Missing logger folder: result/finetune_vqa_randaug_seed0_from_vilt_200k_mlm_itm
    WARNING - lightning - Missing logger folder: result/finetune_vqa_randaug_seed0_from_vilt_200k_mlm_itm
    Global seed set to 0
    INFO - lightning - Global seed set to 0
    initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
    INFO - lightning - initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
    INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:1 to store for rank: 0
    INFO - torch.distributed.distributed_c10d - Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
    ERROR - ViLT - Failed after 0:00:06!
    Traceback (most recent call last):
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/experiment.py", line 312, in run_commandline
        return self.run(
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/experiment.py", line 276, in run
        run()
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/run.py", line 238, in call
        self.result = self.main_function(*args)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/config/captured_function.py", line 42, in captured_function
        result = wrapped(*args, **kwargs)
      File "run.py", line 71, in main
        trainer.fit(model, datamodule=dm)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
        results = self.accelerator_backend.train()
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train
        results = self.ddp_train(process_idx=self.task_idx, model=model)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 268, in ddp_train
        self.trainer.call_setup_hook(model)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in call_setup_hook
        self.datamodule.setup(stage_name)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
        return fn(*args, **kwargs)
      File "/home/imt2018525/ViLT/vilt/datamodules/multitask_datamodule.py", line 34, in setup
        dm.setup(stage)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
        return fn(*args, **kwargs)
      File "/home/imt2018525/ViLT/vilt/datamodules/vqav2_datamodule.py", line 19, in setup
        super().setup(stage)
      File "/home/imt2018525/ViLT/vilt/datamodules/datamodule_base.py", line 138, in setup
        self.set_val_dataset()
      File "/home/imt2018525/ViLT/vilt/datamodules/datamodule_base.py", line 88, in set_val_dataset
        self.val_dataset = self.dataset_cls(
      File "/home/imt2018525/ViLT/vilt/datasets/vqav2_dataset.py", line 16, in __init__
        super().__init__(
      File "/home/imt2018525/ViLT/vilt/datasets/base_dataset.py", line 43, in __init__
        tables = [
      File "/home/imt2018525/ViLT/vilt/datasets/base_dataset.py", line 44, in <listcomp>
        pa.ipc.RecordBatchFileReader(
      File "/home/imt2018525/.local/lib/python3.8/site-packages/pyarrow/ipc.py", line 94, in __init__
        self._open(source, footer_offset=footer_offset)
      File "pyarrow/ipc.pxi", line 624, in pyarrow.lib._RecordBatchFileReader._open
      File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
      File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
    pyarrow.lib.ArrowInvalid: Not an Arrow file

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "run.py", line 11, in <module>
        def main(_config):
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/experiment.py", line 190, in automain
        self.run_commandline()
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/experiment.py", line 347, in run_commandline
        print_filtered_stacktrace()
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/utils.py", line 493, in print_filtered_stacktrace
        print(format_filtered_stacktrace(filter_traceback), file=sys.stderr)
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/utils.py", line 528, in format_filtered_stacktrace
        return "".join(filtered_traceback_format(tb_exception))
      File "/home/imt2018525/.local/lib/python3.8/site-packages/sacred/utils.py", line 568, in filtered_traceback_format
        current_tb = tb_exception.exc_traceback
    AttributeError: 'TracebackException' object has no attribute 'exc_traceback'

    I am not sure whether this is an issue with the pyarrow version. Can someone help me resolve this error? Thanks in advance.

    opened by psrimanreddy 0
  • Mistakes in vqa_dict.json ?

    Hey, I ran demo_vqa.py and did some more digging. What I found is that the "ids" in vqa_dict.json (downloaded from the URL "https://github.com/dandelin/ViLT/releases/download/200k/" in demo_vqa.py) skip the id "125": the ids jump from "124" to "126", which caused some bugs. Can you please check this issue and tell me the original answer pair with the id "125"? Thanks a lot!

    opened by Rom-Worker 0
  • The problem of fine-flickr30k

    Hello, what configuration did you use when fine-tuning ViLT on Flickr30k? Eight GPUs, and how much memory per GPU? What result did you get from fine-tuning on Flickr30k, and is a weight file available?

    opened by wuqiang12345 0
  • pretrain datasets

    Hello author, great work! As time goes by, many of the image URLs in the dataset have become invalid. Is there any solution? Could you provide the arrow files?

    opened by mactavish91 0
  • Question about train on coco dataset

    Traceback (most recent calls WITHOUT Sacred internals):
      File "run.py", line 71, in main
        trainer.fit(model, datamodule=dm)
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
        results = self.accelerator_backend.train()
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train
        results = self.ddp_train(process_idx=self.task_idx, model=model)
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 268, in ddp_train
        self.trainer.call_setup_hook(model)
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in call_setup_hook
        self.datamodule.setup(stage_name)
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
        return fn(*args, **kwargs)
      File "/data/zjw/ViLT/vilt/datamodules/multitask_datamodule.py", line 34, in setup
        dm.setup(stage)
      File "/home/amax/anaconda3/envs/vilt/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
        return fn(*args, **kwargs)
      File "/data/zjw/ViLT/vilt/datamodules/datamodule_base.py", line 137, in setup
        self.set_train_dataset()
      File "/data/zjw/ViLT/vilt/datamodules/datamodule_base.py", line 76, in set_train_dataset
        self.train_dataset = self.dataset_cls(
      File "/data/zjw/ViLT/vilt/datasets/coco_caption_karpathy_dataset.py", line 17, in __init__
        super().__init__(*args, **kwargs, names=names, text_column_name="caption")
      File "/data/zjw/ViLT/vilt/datasets/base_dataset.py", line 53, in __init__
        self.table_names += [name] * len(tables[i])
    IndexError: list index out of range
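
    A hedged reading of that IndexError (the split names are assumed from the dataset class): base_dataset.py builds its list of tables only from the .arrow files it actually finds, but still indexes the list once per expected split name, so a missing COCO split file pushes tables[i] out of range. A quick check:

    import os

    data_root = "/path/to/arrows"  # hypothetical
    for name in ["coco_caption_karpathy_train", "coco_caption_karpathy_restval"]:
        path = f"{data_root}/{name}.arrow"
        print(path, "exists" if os.path.isfile(path) else "MISSING")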

    opened by Bmilab22 0