UniLM AI - Large-scale Self-supervised Pre-training across Tasks, Languages, and Modalities

Overview

UniLM AI

Pre-trained (foundation) models across tasks (understanding, generation and translation), languages (100+ languages), and modalities (language, image, audio, vision + language, audio + language, etc.)

The family of UniLM AI:

UniLM (v1@NeurIPS'19 | v2@ICML'20 | v3@ACL'21): unified pre-training for language understanding and generation

InfoXLM (v1@NAACL'21 | v2@ACL'21): multilingual/cross-lingual pre-trained models for 100+ languages

DeltaLM (NEW): encoder-decoder pre-training for language generation and translation for 100+ languages

MiniLM (v1@NeurIPS'20 | v2@ACL'21): small and fast pre-trained models for language understanding and generation

AdaLM (v1@ACL'21): domain, language, and task adaptation of pre-trained models

LayoutLM (v1@KDD'20 | v2@ACL'21): multimodal (text + layout/format + image) pre-training for Document AI (e.g. scanned documents, PDF, etc.)

LayoutXLM (NEW): multimodal (text + layout/format + image) pre-training for multilingual document understanding

LayoutReader (EMNLP'21): Pre-training of text and layout for reading order detection

BEiT (NEW): BERT Pre-Training of Image Transformers

UniSpeech (v1@ICML'21): Speech Pre-Training for ASR and TTS

s2s-ft: sequence-to-sequence fine-tuning toolkit

XLM-T (NEW): Multilingual NMT w/ pretrained cross-lingual encoders

News

  • August 2021: LayoutLMv2 and LayoutXLM are on HuggingFace (a loading sketch follows this list)
  • [Model Release] August, 2021: LayoutReader - Built with LayoutLM to improve general reading order detection.
  • [Model Release] August, 2021: DeltaLM - Encoder-decoder pre-training for language generation and translation.
  • August 2021: BEiT is on HuggingFace
  • [Model Release] July, 2021: BEiT - Towards BERT moment for CV
  • [Model Release] June, 2021: LayoutLMv2, LayoutXLM, MiniLMv2, and AdaLM.
  • May, 2021: LayoutLMv2, InfoXLMv2, MiniLMv2, UniLMv3, and AdaLM were accepted by ACL 2021.
  • April, 2021: LayoutXLM is coming, extending LayoutLM to multilingual support! A multilingual form understanding benchmark XFUND is also introduced, which includes forms with human-labeled key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
  • March, 2021: InfoXLM was accepted by NAACL 2021.
  • December 29th, 2020: LayoutLMv2 is coming with new SOTA on a wide variety of document AI tasks, including the DocVQA and SROIE leaderboards.
  • October 8th, 2020: T-ULRv2 (aka InfoXLM) is the SOTA on the XTREME leaderboard. // Blog
  • September, 2020: MiniLM was accepted by NeurIPS 2020.
  • July 16, 2020: InfoXLM (Multilingual UniLM) arXiv
  • June, 2020: UniLMv2 was accepted by ICML 2020; LayoutLM was accepted by KDD 2020.
  • April 5, 2020: Multilingual MiniLM released!
  • September, 2019: UniLMv1 was accepted by NeurIPS 2019.
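
A loading sketch for the HuggingFace checkpoints mentioned above (LayoutLMv2 and BEiT). This is a minimal, hedged example, assuming a transformers release that already ships these models (roughly v4.9 or later) and, for the full LayoutLMv2 model, an installed detectron2; the identifiers below are the public Hub names:

    from transformers import (
        LayoutLMv2Tokenizer,
        LayoutLMv2ForTokenClassification,
        BeitFeatureExtractor,
        BeitForImageClassification,
    )

    # LayoutLMv2: tokenizer plus a token-classification head (randomly initialized until fine-tuned)
    lmv2_tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
    lmv2_model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased")

    # BEiT: feature extractor plus the ImageNet-1k classification head
    beit_extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224")
    beit_model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")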

Release

***** New August, 2021: LayoutReader release *****

***** New August, 2021: DeltaLM release *****

***** New July, 2021: BEiT release *****

***** New June, 2021: LayoutXLM | AdaLM | MiniLMv2 release *****

***** New May, 2021: LayoutLMv2 | LayoutXLM release *****

  • LayoutLM 2.0 (December 29, 2020): multimodal pre-training for visually-rich document understanding by leveraging text, layout and image information in a single framework. It achieves new SOTA results on a wide range of document understanding tasks, including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852), RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). "LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding" (ACL 2021)

***** February, 2020: UniLM v2 | MiniLM v1 | LayoutLM v1 | s2s-ft v1 release *****

***** October 1st, 2019: UniLM v1 release *****

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree. Portions of the source code are based on the transformers project.

Microsoft Open Source Code of Conduct

Contact Information

For help or issues using UniLM AI models, please submit a GitHub issue.

For other communications related to UniLM AI, please contact Li Dong (lidong1@microsoft.com) or Furu Wei (fuwei@microsoft.com).

Comments
  • [LayoutLM] How to reproduce FUNSD result

    Hello, I have run fine-tuning for the sequence labeling task with the FUNSD dataset, but I couldn't achieve the result presented in the paper (precision is only 40%). Here are the scripts and logs that I used; any idea about what could be wrong? Thank you very much. Training:

    #!/bin/bash
    
    python run_seq_labeling.py  --data_dir ~/mnt/data \
                                --model_type layoutlm \
                                --model_name_or_path ~/mnt/model \
                                --do_lower_case \
                                --max_seq_length 512 \
                                --do_train \
                                --num_train_epochs 100.0 \
                                --logging_steps 10 \
                                --save_steps -1 \
                                --output_dir ~/mnt/output \
                                --labels ~/mnt/data/labels.txt \
                                --per_gpu_train_batch_size 16 \
                                --fp16
    

    Testing:

    #!/bin/bash
    
    python run_seq_labeling.py --do_predict\
      --model_type layoutlm\
      --model_name_or_path ~/mnt/model\
      --data_dir ~/mnt/data\
      --output_dir ~/mnt/output\
      --labels ~/mnt/data/labels.txt
    

    Some log:

    05/14/2020 09:40:45 - INFO - __main__ -   ***** Running training *****
    05/14/2020 09:40:45 - INFO - __main__ -     Num examples = 150
    05/14/2020 09:40:45 - INFO - __main__ -     Num Epochs = 100
    05/14/2020 09:40:45 - INFO - __main__ -     Instantaneous batch size per GPU = 16
    05/14/2020 09:40:45 - INFO - __main__ -     Total train batch size (w. parallel, distributed & accumulation) = 16
    05/14/2020 09:40:45 - INFO - __main__ -     Gradient Accumulation steps = 1
    05/14/2020 09:40:45 - INFO - __main__ -     Total optimization steps = 1000
    05/14/2020 09:53:00 - INFO - __main__ -    global_step = 1000, average loss = 0.10387736940692412
    
    05/14/2020 10:17:07 - INFO - __main__ -   ***** Running evaluation  *****
    05/14/2020 10:17:07 - INFO - __main__ -     Num examples = 52
    05/14/2020 10:17:07 - INFO - __main__ -     Batch size = 8
    05/14/2020 10:17:07 - INFO - __main__ -   
               precision    recall  f1-score   support
    
     QUESTION       0.41      0.70      0.52       771
       HEADER       0.00      0.00      0.00       108
       ANSWER       0.39      0.50      0.44       513
    
    micro avg       0.40      0.57      0.47      1392
    macro avg       0.37      0.57      0.45      1392
    
    05/14/2020 10:17:07 - INFO - __main__ -   ***** Eval results  *****
    05/14/2020 10:17:07 - INFO - __main__ -     f1 = 0.472115668338743
    05/14/2020 10:17:07 - INFO - __main__ -     loss = 2.9291565077645436
    05/14/2020 10:17:07 - INFO - __main__ -     precision = 0.400600901352028
    05/14/2020 10:17:07 - INFO - __main__ -     recall = 0.5747126436781609
    
    opened by nv-quan 17
  • LayoutLMv2 code release

    Thanks for sharing the great work! I was wondering if there is an expected date on when you will be releasing your code and pre-trained models for LayoutLMv2.

    opened by rtanaka-lab 14
  • Unable to get valid value from layoutlm model

    Hi there,

    Thank you very much for your work. I'd like to use the LayoutLM network for sequence labeling, but I have run into a problem. The details are as follows:

    For the environment, I tried to set it up as in the README, but the command pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ failed, so I used my existing environment, which also meets the requirements.

    For the data, I downloaded FUNSD and preprocessed its testing_data following the README.

    For the main script, I run run_seq_labeling.py directly (rather than from the command line) after preprocessing. Training finishes, but prediction does not. My configuration is equivalent to the following command (the model was downloaded from Google Drive):

    python run_seq_labeling.py --data_dir data \
                               --model_type layoutlm \
                               --model_name_or_path layout-large-uncased \
                               --output_dir out \
                               --labels data/labels.txt \
                               --do_predict \
                               --do_lower_case \
                               --overwrite_output_dir

    All other options are left at their defaults. The error occurs at line 357: eval_loss += tmp_eval_loss.item()

    The error is RuntimeError: CUDA error: device-side assert triggered.

    I debugged it, and the error seems to originate from line 349: outputs = model(input)

    The input is a dict of tensors containing {input_ids, attention_mask, labels, bbox, token_type_ids}, but the output is a tuple of 2 tensors whose data show Unable to get repr for 'torch.Tensor'.

    I ran it as I understood it from the paper, but I am not sure whether that is correct. I have spent a long time on this without result; could you please provide some help? Sincerely, thanks.

    opened by NancyNozomi 14
  • [unused99] occurs two times in unilm1.2-base-uncased-vocab.txt

    When running s2s-ft, [unused99] occurs twice in unilm1.2-base-uncased-vocab.txt, which produces an incorrect seq2seq_loader.Preprocess4Seq2seqDecoder.vocab_words.

    opened by GentleSmile 14
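
    A quick, plain-Python way to confirm this kind of duplicate (a sketch; the filename is the vocabulary file mentioned above) is to count repeated lines in the vocab file:

    from collections import Counter

    # count how often each vocabulary entry appears; a healthy vocab has no repeats
    with open("unilm1.2-base-uncased-vocab.txt", encoding="utf-8") as f:
        tokens = [line.rstrip("\n") for line in f]

    duplicates = {tok: n for tok, n in Counter(tokens).items() if n > 1}
    print(duplicates)  # e.g. {'[unused99]': 2} if the token really occurs twice
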
  • Question Generation example for S2S-FT

    Hi Li Dong, thanks a lot for your code and the s2s-ft example. The README shows abstractive summarization usage; could you please also give a question generation usage example? Thanks in advance, Philippe. PS: I have another question, but that is better kept in a separate issue for tracking.

    opened by Neuronys 13
  • Issue in the backbone of LayoutLMv2

    Describe the bug. Model I am using: LayoutLMv2

    While trying to train LayoutLMv2, I got an error saying,

    ~/orbi/unilm/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py in forward(self, images)
        605 
        606     def forward(self, images):
    --> 607         images_input = (images.tensor - self.pixel_mean) / self.pixel_std
        608         features = self.backbone(images_input)
        609         features = features[self.out_feature_key]
    
    AttributeError: 'Tensor' object has no attribute 'tensor'
    

    I tried running it without .tensor and it worked fine.

    Please let me know if there is any mistake in the way I am passing the images. Otherwise, I can open a PR to remove .tensor.

    Thanks!

    cc: @ranpox

    opened by rushabh-v 12
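
    For context on the input that backbone expects (a hedged sketch, not a confirmed fix for this repository's data pipeline): the code above follows detectron2 conventions, where images is a detectron2 ImageList carrying a padded .tensor attribute rather than a raw torch.Tensor. Assuming detectron2 is installed, such an object can be built like this:

    import torch
    from detectron2.structures import ImageList

    # two dummy CHW images; real inputs would come from the document image loader
    batch = [torch.rand(3, 224, 224), torch.rand(3, 224, 224)]
    images = ImageList.from_tensors(batch, size_divisibility=32)
    print(images.tensor.shape)  # padded batch tensor that the backbone then normalizes
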
  • KeyError: 'layoutlmv2' in AutoTokenizer

    I get a KeyError when I try to run AutoTokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased"). The same happens even when I download the files locally and run the above with the path to the config folder.

    I also do not find layoutlmv2 in the AutoTokenizer.from_pretrained Documentation.

    Any leads on how to use it the right way would be helpful

    opened by sindhuattaiger 12
  • Is transformers LayoutLM really working?

    I am getting endless errors when trying to use LayoutLMForTokenClassification from transformers for an NER task. Is it just me doing something wrong, or is the function still a work in progress? I would really appreciate any information.

    opened by shaonanqinghuaizongshishi 12
  • RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

    Hi, I was using run_classification.py with the base model of layoutlm for my own document set. However, I came across this error:

    04/25/2020 09:26:55 - WARNING - __main__ -   Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
    04/25/2020 09:26:55 - INFO - transformers.configuration_utils -   loading configuration file /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm-base-uncased/config.json
    04/25/2020 09:26:55 - INFO - transformers.configuration_utils -   Model config {
      "attention_probs_dropout_prob": 0.1,
      "finetuning_task": "cdip",
      "hidden_act": "gelu",
      "hidden_dropout_prob": 0.1,
      "hidden_size": 768,
      "initializer_range": 0.02,
      "intermediate_size": 3072,
      "is_decoder": false,
      "layer_norm_eps": 1e-12,
      "max_2d_position_embeddings": 1024,
      "max_position_embeddings": 512,
      "num_attention_heads": 12,
      "num_hidden_layers": 12,
      "num_labels": 2,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_past": true,
      "pruned_heads": {},
      "torchscript": false,
      "type_vocab_size": 2,
      "use_bfloat16": false,
      "vocab_size": 30522
    }
    
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   Model name '/home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming '/home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/' is a path or url to a directory containing tokenizer files.
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   Didn't find file /home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/added_tokens.json. We won't load it.
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   loading file /home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/vocab.txt
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   loading file None
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   loading file /home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/special_tokens_map.json
    04/25/2020 09:26:55 - INFO - transformers.tokenization_utils -   loading file /home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/tokenizer_config.json
    04/25/2020 09:26:55 - INFO - transformers.modeling_utils -   loading weights file /home/ubuntu/rwik-xx/unilm/layoutlm-base-uncased/pytorch_model.bin
    04/25/2020 09:27:16 - INFO - transformers.modeling_utils -   Weights of LayoutLMForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
    04/25/2020 09:27:16 - INFO - transformers.modeling_utils -   Weights from pretrained model not used in LayoutLMForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight']
    04/25/2020 09:27:19 - INFO - __main__ -   Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='/home/ubuntu/rwik-xx/xx_pdftotext_html_output/', dev_folder='trial_24_04_test', device=device(type='cuda', index=0), do_eval=True, do_lower_case=True, do_train=True, eval_all_checkpoints=True, evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=2, hierarchical_tokens=False, learning_rate=5e-05, local_rank=-1, logging_steps=100, max_grad_norm=1.0, max_seq_length=448, max_steps=10000, model_name_or_path='/home/ubuntu/rwik_xx_document_classification/unilm/layoutlm-base-uncased/', model_type='layoutlm', n_gpu=1, no_cuda=False, nuance_mode=True, num_train_epochs=4.0, output_dir='/home/ubuntu/rwik_xx_document_classification/weight_files/1_layout_lm_base_24_04/', output_mode='classification', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=16, per_gpu_train_batch_size=8, save_steps=100, seed=566, server_ip='', server_port='', stride_len=112, task_name='cdip', tokenizer_name='', tpu=False, tpu_ip_address='', tpu_name='', tqdm_notebook_mode=False, train_folder='trial_24_04_train', warmup_steps=500, weight_decay=0.0, xrt_tpu_config='')
    04/25/2020 09:27:19 - INFO - __main__ -   Creating features from dataset file at /home/ubuntu/rwik-xx/xx_pdftotext_html_output/
    Gettting train examples: 100%|██████████| 789/789 [00:26<00:00, 29.34it/s]
      0%|          | 0/789 [00:00<?, ?it/s]04/25/2020 09:27:46 - INFO - utils_classification -   *** Example ***
    04/25/2020 09:27:46 - INFO - utils_classification -   guid: train-1
    04/25/2020 09:27:46 - INFO - utils_classification -   input_ids: 101 9986 2271 23773 11255 8909 1024 5709... ( truncated )
    04/25/2020 09:27:46 - INFO - utils_classification -   bboxes: [0, 0, 0, 0] [0, 0, 0, 0] [55, 44, 109, 53] [55, 44, 109, 53] [55, 44, 109, 53] [112, 44, 164, 53] ......[1000, 1000, 1000, 1000] ( truncated )
    04/25/2020 09:27:46 - INFO - utils_classification -   attention_mask: 1 1 1 1 1 1 1 1 1 1 1 ... ( truncated )
    04/25/2020 09:27:46 - INFO - utils_classification -   token_type_ids: 0 0 0 0 0 0 0 0 0 0 0 ... ( truncated )
    04/25/2020 09:27:46 - INFO - utils_classification -   label: xxx (id = 1)
    04/25/2020 09:27:46 - INFO - utils_classification -   *** Example ***
    .
    .
    .
    (truncated)
    
    100%|██████████| 789/789 [00:30<00:00, 26.17it/s]
    04/25/2020 09:28:16 - INFO - __main__ -   Saving features into cached file /home/ubuntu/rwik_xx/xx_pdftotext_html_output/cached_train_layoutlm-base-uncased_448_cdip
    04/25/2020 09:28:19 - INFO - __main__ -   ***** Running training *****
    04/25/2020 09:28:19 - INFO - __main__ -     Num examples = 1929
    04/25/2020 09:28:19 - INFO - __main__ -     Num Epochs = 83
    04/25/2020 09:28:19 - INFO - __main__ -     Instantaneous batch size per GPU = 8
    04/25/2020 09:28:19 - INFO - __main__ -     Total train batch size (w. parallel, distributed & accumulation) = 16
    04/25/2020 09:28:19 - INFO - __main__ -     Gradient Accumulation steps = 2
    04/25/2020 09:28:19 - INFO - __main__ -     Total optimization steps = 10000
    Epoch:   0%|          | 0/83 [00:00<?, ?it/s]
    Iteration:   0%|          | 0/242 [00:00<?, ?it/s]
    Epoch:   0%|          | 0/83 [00:00<?, ?it/s]
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm/run_classification.py in <module>()
        925 
        926 if __name__ == "__main__":
    --> 927     main()
    
    /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm/run_classification.py in main()
        859             args, args.task_name, tokenizer, evaluate=False
        860         )
    --> 861         global_step, tr_loss = train(args, train_dataset, model, tokenizer)
        862         logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
        863 
    
    /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm/run_classification.py in train(args, train_dataset, model, tokenizer)
        217                 )  # XLM, DistilBERT and RoBERTa don't use segment_ids
        218             #pdb.set_trace()
    --> 219             outputs = model(**inputs)
        220             loss = outputs[
        221                 0
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
        328             token_type_ids=token_type_ids,
        329             position_ids=position_ids,
    --> 330             head_mask=head_mask,
        331         )
        332 
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/rwik_xx_document_classification/unilm/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
        191         )
        192         encoder_outputs = self.encoder(
    --> 193             embedding_output, extended_attention_mask, head_mask=head_mask
        194         )
        195         sequence_output = encoder_outputs[0]
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/.local/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask)
        378                 all_hidden_states = all_hidden_states + (hidden_states,)
        379 
    --> 380             layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask)
        381             hidden_states = layer_outputs[0]
        382 
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/.local/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask)
        349 
        350     def forward(self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None):
    --> 351         self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
        352         attention_output = self_attention_outputs[0]
        353         outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/.local/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask)
        303 
        304     def forward(self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None):
    --> 305         self_outputs = self.self(hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask)
        306         attention_output = self.output(self_outputs[0], hidden_states)
        307         outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/.local/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask)
        213 
        214     def forward(self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None):
    --> 215         mixed_query_layer = self.query(hidden_states)
        216 
        217         # If this is instantiated as a cross-attention module, the keys
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        548             result = self._slow_forward(*input, **kwargs)
        549         else:
    --> 550             result = self.forward(*input, **kwargs)
        551         for hook in self._forward_hooks.values():
        552             hook_result = hook(self, input, result)
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
         85 
         86     def forward(self, input):
    ---> 87         return F.linear(input, self.weight, self.bias)
         88 
         89     def extra_repr(self):
    
    /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
       1610         ret = torch.addmm(bias, input, weight.t())
       1611     else:
    -> 1612         output = input.matmul(weight.t())
       1613         if bias is not None:
       1614             output += bias
    
    RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
    

    I am using torch version 1.3.0, transformers version 2.2.1, and Python 3.

    Note that I have modified the loading process to suit my dataset's format and have verified that it is being loaded correctly. In addition, I tried running roberta-base on the same dataset and it worked.

    When I ran the Python debugger and checked the value of the input variable, I got this error:

    > /home/ubuntu/.local/lib/python3.6/site-packages/torch/nn/functional.py(1612)linear()
       1610         ret = torch.addmm(bias, input, weight.t())
       1611     else:
    -> 1612         output = input.matmul(weight.t())
       1613         if bias is not None:
       1614             output += bias
    
    ipdb> input
    *** RuntimeError: cuda runtime error (700) : an illegal memory access was encountered at /pytorch/aten/src/THC/THCCachingHostAllocator.cpp:278
    ipdb> weight
    *** RuntimeError: cuda runtime error (700) : an illegal memory access was encountered at /pytorch/aten/src/THC/THCCachingHostAllocator.cpp:278
    
    opened by rwikdutta 12
  • Question about pre-training BEiT-base on ImageNet-1k and then fine-tuning on ADE20K

    Model I am using (UniLM, MiniLM, LayoutLM ...): BEiT. I am trying to reproduce self-supervised pre-training of BEiT-base on ImageNet-1k followed by fine-tuning on ADE20K. In the paper this gives 45.6 mIoU, slightly higher than supervised pre-training on ImageNet (45.3) and DINO (44.1), but I cannot reproduce this result; I only get an mIoU of about 39.

    I wonder whether you could provide all the training commands and hyperparameters for self-supervised pre-training of BEiT-base on ImageNet-1k and fine-tuning on ADE20K. Thanks~

    opened by laojiangwei 11
  • Can LayoutLM be used for commercial purpose?

    Can LayoutLM be used for commercial purposes? How does the LayoutLM license differ from that of the other LayoutLM variants (LayoutLMv2, LayoutLMFT, LayoutXLM)? Does the license hold for both the trained model and the code? And can one use a trained model provided by other sources, such as DocBank, for commercial purposes?

    I have also noticed that the LayoutLM folder is marked as deprecated. What does that mean in terms of use and license?

    Model I am using (LayoutLM):

    opened by ghost 11
  • ValueError: You must provide corresponding bounding boxes

    Hi,

    I have a question about training LayoutLMv2 on the FUNSD dataset. I followed the steps and got the following issue: ValueError: You must provide corresponding bounding boxes.

    I followed the command to train LayoutLMv2 on the FUNSD dataset.

    Here is the screenshot of the issue. Any advice will be appreciated!

    opened by 14H034160212 0
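
    For reference, recent transformers versions raise this ValueError from the LayoutLMv2 tokenizer/processor when pre-tokenized words are passed without word-level boxes. A minimal sketch of the expected call, assuming a transformers release with LayoutLMv2 support (roughly v4.9+); the words and boxes here are made-up examples on the 0-1000 coordinate scale:

    from transformers import LayoutLMv2Tokenizer

    tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
    words = ["Name:", "John"]
    boxes = [[48, 84, 156, 98], [160, 84, 208, 98]]  # one normalized box per word
    encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
    print(encoding["input_ids"].shape)
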
  • Beit3 classification

    Thanks for your inspiring and effective work; you achieved really great performance on ImageNet-1k classification. As the paper mentions, you treat classification as retrieval and do intermediate retrieval on ImageNet-21k before fine-tuning. But we are wondering how you handle polysemy and identical class names for different classes in ImageNet-21k.

    Looking forward to your reply.

    opened by amandaluof 1
  • Could you please release the training pipeline of BEATs?

    As described in the paper, the training pipeline/framework of BEATs is complex and is its core contribution. @[email protected] @[email protected] Thank you!

    BEATs: Audio Pre-Training with Acoustic Tokenizers

    opened by hllmathcs 0
  • License for LayoutLMv2

    Hi @wolfshow, we noticed that the license for LayoutLMv2 has been changed to a non-commercial one. Does this license change apply only to model weights/code downloaded after Sept 19 (the day the license was changed)? Can people who downloaded it prior to the license modification date continue to use it for commercial purposes?

    opened by riteshKumarUMass 1
  • Bump pillow from 8.3.1 to 9.3.0 in /vlmo

    Bumps pillow from 8.3.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 0
  • [DiT] Question regarding image resolutions on different tasks

    First of all, thanks for your general great work on document AI and specifically DiT in this case.

    In the paper, it says

    Since the image resolution for object detection tasks is much larger than classification, we limit the batch size to 16.

    which confuses me. If I understand correctly, when using a pretrained version of DiT, one is ALWAYS limited by the 224x224 image resolution, since this is constrained by the size of the patch embeddings (similar to how e.g. BERT-base simply can't go beyond 512 tokens due to the position embeddings). So regardless of the original size of the image, the input the model gets is always limited to this predefined 224x224.

    IF this reasoning is correct, then I cannot comprehend the logic behind resizing random crops of an image as described in the paper:

    Specifically, the input image is cropped with probability 0.5 to a random rectangular patch which is then resized again such that the shortest side is at least 480 and at most 800 pixels while the longest at most 1,333.

    Any clarification for this would be very much appreciated, thanks in advance!

    opened by mrvoh 1