Unsupervised Language Modeling at scale for robust sentiment classification

Overview

** DEPRECATED **

This repo has been deprecated. Please visit Megatron-LM for our up-to-date large-scale unsupervised pretraining and finetuning code.

If you would still like to use this codebase, see our tagged releases and install the required software/dependencies that were publicly available at that date.

PyTorch Unsupervised Sentiment Discovery

This codebase contains pretrained binary sentiment and multimodal emotion classification models as well as code to reproduce results from our series of large-scale pretraining + transfer NLP papers: Large Scale Language Modeling: Converging on 40GB of Text in Four Hours and Practical Text Classification With Large Pre-Trained Language Models. This effort was born out of a desire to reproduce, analyze, and scale the Generating Reviews and Discovering Sentiment paper from OpenAI.

The techniques used in this repository are general purpose, and our easy-to-use command line interface can be used to train state-of-the-art classification models on your own difficult classification datasets.

This codebase supports mixed precision training as well as distributed, multi-gpu, multi-node training for language models (support is provided based on the NVIDIA APEx project). In addition to training language models, this codebase can be used to easily transfer and finetune trained models on custom text classification datasets.

For example, a Transformer language model for unsupervised modeling of large text datasets, such as the amazon-review dataset, is implemented in PyTorch. We also support other tokenization methods, such as character or sentencepiece tokenization, and language models using various recurrent architectures.

The learned language model can be transferred to other natural language processing (NLP) tasks, where it is used to featurize text samples. The featurizations provide a strong initialization point for discriminative language tasks, and allow for competitive task performance given only a few labeled samples. For example, we consider finetuning our models on the difficult task of multimodal emotion classification based on a subset of the Plutchik wheel of emotions.
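
The recipe can be pictured with a toy sketch (not this repository's API; the tiny character-level encoder below merely stands in for the pretrained mLSTM/transformer): featurize each sample with the frozen language model, then fit a simple linear probe on those features.

import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class CharEncoder(nn.Module):
    # Stand-in character-level encoder, just to illustrate the recipe.
    def __init__(self, vocab_size=256, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, byte_ids):
        _, (h, c) = self.rnn(self.embed(byte_ids))
        return c[-1]                      # final cell state used as the text feature

def featurize(encoder, texts):
    # Run the frozen encoder over raw bytes and collect one feature vector per text.
    with torch.no_grad():
        return [encoder(torch.tensor([list(t.encode("utf-8"))])).squeeze(0).numpy()
                for t in texts]

encoder = CharEncoder()                   # in practice: the pretrained language model
X = featurize(encoder, ["great movie", "terrible plot"])
clf = LogisticRegression().fit(X, [1, 0]) # linear probe on frozen features
print(clf.predict(featurize(encoder, ["what a great film"])))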

[figure: Plutchik's wheel of emotions]

Created by Robert Plutchik, this wheel is used to illustrate different emotions in a compelling and nuanced way. He suggested that there are 8 primary bipolar emotions (joy versus sadness, anger versus fear, trust versus disgust, and surprise versus anticipation) with different levels of emotional intensity. For our classification task we utilize tweets from the SemEval2018 Task 1E-c emotion classification dataset to perform multilabel classification of anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. This is a difficult task that suffers from real world classification problems such as class imbalance and labeler disagreement.
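
As a rough illustration of the multilabel setup (sklearn is used here only for the sketch; this is not how the repo encodes its targets), each tweet maps to a multi-hot vector over the eight Plutchik emotions:

from sklearn.preprocessing import MultiLabelBinarizer

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

# Each tweet may express several emotions at once, so targets are multi-hot vectors.
mlb = MultiLabelBinarizer(classes=EMOTIONS)
y = mlb.fit_transform([["joy", "trust"], ["anger", "disgust", "fear"], []])
print(y)   # one row per tweet, one column per emotion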

[figure: SemEval emotion classification results]

On the full SemEval emotion classification dataset we find that finetuning our model on the data achieves competitive state of the art performance with no additional domain-specific feature engineering.

[figure: SemEval leaderboard]

ReadMe Contents

Setup

Install

Install the sentiment_discovery package with python3 setup.py install in order to run the modules/scripts within this repo.

Python Requirements

At this time we only support Python 3.

  • numpy
  • pytorch (>= 0.4.1)
  • pandas
  • scikit-learn
  • matplotlib
  • unidecode
  • sentencepiece
  • seaborn
  • emoji

Pretrained models

We've included our sentencepiece tokenizer model and vocab as a zip file:

We've included a transformer language model base as well as a 4096-d mLSTM language model base. For examples on how to use these models please see our finetuning and transfer sections. Even though these models were trained with FP16 they can be used in FP32 training/inference.

We've also included classifiers trained on a subset of SemEval emotions corresponding to the 8 plutchik emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust):

Lastly, we've also included already trained classification models for SST and IMDB binary sentiment classification:

To use classification models that reproduce results from our original large batch language modeling paper please use the following commit hash and set of models.

We did not include pretrained models leveraging ELMo. To reproduce our papers' results with ELMo, please see our available resources.

Each file has a dictionary containing a PyTorch state_dict consisting of a language model (lm_encoder keys) trained on Amazon reviews and a classifier (classifier key) as well as accompanying args necessary to run a model with that state_dict.
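
A minimal sketch of inspecting one of these checkpoints, assuming only the key names described above (the filename is a placeholder):

import torch

sd = torch.load("mlstm_semeval.clf", map_location="cpu")   # placeholder filename

lm_weights  = sd["lm_encoder"]    # language model trained on Amazon reviews
clf_weights = sd["classifier"]    # classifier head
model_args  = sd["args"]          # accompanying args needed to rebuild the model

# FP16-trained weights can be cast up if you want to run in FP32.
lm_weights_fp32 = {k: (v.float() if torch.is_tensor(v) else v)
                   for k, v in lm_weights.items()}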

Data Downloads

In the ./data folder we've provided processed copies of the Binary Stanford Sentiment Treebank (Binary SST), IMDB Movie Review, and the SemEval2018 Tweet Emotion datasets as part of this repository. In order to train on the amazon dataset please download the "aggressively deduplicated data" version from Julian McAuley's original site. Access requests to the dataset should be approved instantly. While using the dataset make sure to load it with the --loose-json flag.
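
If it helps to picture what --loose-json handles, the Amazon dump is (roughly) one JSON object per line rather than a single JSON document; a small sketch of reading it, using the field names passed to the pretraining commands below:

import json

def iter_loose_json(path):
    # one JSON object per line ("loose" JSON), rather than one monolithic array
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

for review in iter_loose_json("aggressive_dedup.json"):    # path is illustrative
    text, label = review["reviewText"], review["overall"]  # --text-key / --label-key fields
    break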

Usage

In addition to providing easily reusable code of the core functionalities (models, distributed, fp16, etc.) of this work, we also provide scripts to perform the high-level functionalities of the original paper:

  • sentiment classification of input text
  • unsupervised reconstruction/language modeling of a corpus of text (+ script for launching distributed workers)
  • transfer of learned language model to perform sentiment analysis on a specified corpus
  • sampling from language model to generate text (possibly of fixed sentiment) + heatmap visualization of sentiment in text

Classifying text

Classify an input csv/json using one of our pretrained models or your own. Performs classification on Binary SST by default. Output classification probabilities are saved to a .npy file.

python3 run_classifier.py --load_model ama_sst.pt                               # classify Binary SST
python3 run_classifier.py --load_model ama_sst_16.pt --fp16                     # run classification in fp16
python3 run_classifier.py --load_model ama_sst.pt --text-key <text-column> --data <path.csv>     # classify your own dataset

See here for more documentation.
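
As a small follow-up sketch (the filename is illustrative), the saved probabilities can be loaded back with NumPy and thresholded into hard labels:

import numpy as np

probs = np.load("clf_results.npy")     # illustrative name for the saved .npy output
preds = (probs > 0.5).astype(int)      # simple 0.5 cutoff for binary sentiment
print(probs.shape, preds[:10])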

Training Language Models (+ Distributed/FP16 Training)

Train a language model on a csv/json corpus. By default we train a weight-normalized, 4096-d mLSTM, with a 64-d character embedding. This is the first step of a 2-step process to training your own sentiment classifier. Saves model to lang_model.pt by default.

python3 pretrain.py                                                               #train a large model on imdb
python3 pretrain.py --model LSTM --nhid 512                                       #train a small LSTM instead
python3 pretrain.py --fp16 --dynamic-loss-scale                                   #train a model with fp16
python3 -m multiproc pretrain.py                                                  #distributed model training
python3 pretrain.py --data ./data/amazon/reviews.json --lazy --loose-json \       #train a model on amazon data
  --text-key reviewText --label-key overall --optim Adam --split 1000,1,1 
python3 pretrain.py --tokenizer-type SentencePieceTokenizer --vocab-size 32000 \  #train a model with our sentencepiece tokenization
  --tokenizer-type bpe --tokenizer-path ama_32k_tokenizer.model 
python3 pretrain.py --tokenizer-type SentencePieceTokenizer --vocab-size 32000 \  #train a transformer model with our sentencepiece tokenization
  --tokenizer-type bpe --tokenizer-path ama_32k_tokenizer.model --model transformer \
  --decoder-layers 12 --decoder-embed-dim 768 --decoder-ffn-embed-dim 3072 \
  --decoder-learned-pos --decoder-attention-heads 8
bash ./experiments/train_mlstm_singlenode.sh                                      #run our mLSTM training script on 1 DGX-1V
bash ./experiments/train_transformer_singlenode.sh                                #run our transformer training script on 1 DGX-1V 

For more documentation of our language modeling functionality, look here.

In order to learn about our language modeling experiments and reproduce results see the training reproduction section in analysis.

For information about how we achieve numerical stability with FP16 training see our fp16 training analysis.
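
The core trick behind --dynamic-loss-scale is to scale the loss up before backprop so small FP16 gradients don't underflow, unscale before the update, and shrink the scale whenever an overflow produces non-finite gradients. A rough sketch of that loop (not the repository's FP16 optimizer):

import torch

scale, growth_interval, good_steps = 2.0 ** 15, 1000, 0

def fp16_step(model, optimizer, loss):
    """One optimizer step with dynamic loss scaling (illustrative only)."""
    global scale, good_steps
    (loss * scale).backward()                       # scale up to avoid FP16 underflow
    grads_finite = all(torch.isfinite(p.grad).all()
                       for p in model.parameters() if p.grad is not None)
    if grads_finite:
        for p in model.parameters():
            if p.grad is not None:
                p.grad.div_(scale)                  # unscale before the update
        optimizer.step()
        good_steps += 1
        if good_steps % growth_interval == 0:
            scale *= 2.0                            # grow after a run of clean steps
    else:
        scale /= 2.0                                # overflow: shrink scale, skip the step
    optimizer.zero_grad()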

Sentiment Transfer

Given a trained language model, this script will featurize text from train, val, and test csv/json's. It then uses sklearn logistic regression to fit a classifier to predict sentiment from these features. Lastly, it performs feature selection to try to fit a regression model to the top n most relevant neurons (features). By default only one neuron is used for this second regression.

python3 transfer.py --load mlstm.pt                                 #performs transfer to SST, saves results to `<model>_transfer/` directory
python3 transfer.py --load mlstm.pt --neurons 5                     #use 5 neurons for the second regression
python3 transfer.py --load mlstm.pt --fp16                          #run model in fp16 for featurization step
bash ./experiments/run_sk_sst.sh                                    #run transfer learning with mlstm on sst dataset
bash ./experiments/run_sk_imdb.sh                                   #run transfer learning with mlstm on imdb dataset
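
Roughly, the feature-selection step can be pictured as: fit a regularized logistic regression on all features, rank neurons by coefficient magnitude, then refit on only the top n. A sketch with sklearn (not the script's exact logic; the X_* and y_* names are placeholder NumPy arrays from the featurization step):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_top_neurons(X_train, y_train, n_neurons=1):
    # Full-feature probe with L1 sparsity, then a refit on the most predictive units.
    full = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    full.fit(X_train, y_train)
    top = np.argsort(np.abs(full.coef_[0]))[::-1][:n_neurons]
    probe = LogisticRegression().fit(X_train[:, top], y_train)
    return top, probe

# usage:
# top, probe = fit_top_neurons(X_train, y_train, n_neurons=5)
# acc = probe.score(X_val[:, top], y_val)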

Additional documentation of the command line arguments available for transfer can be found here.

Classifier Finetuning

Given a trained language model and classification dataset, this script will build a classifier that leverages the trained language model as a text feature encoder. The difference between this script and transfer.py is that the model training is performed end to end: the loss from the classifier is backpropagated into the language model encoder as well. This script allows one to build more complex classification models, metrics, and loss functions than transfer.py. This script supports building arbitrary multilabel, multilayer, and multihead perceptron classifiers. Additionally, it allows using language modeling as an auxiliary task loss during training and multihead variance as an auxiliary loss during training. Lastly, this script supports automatically selecting classification thresholds from validation performance. To measure validation performance this script includes more complex metrics, including F1 score, Matthews correlation coefficient, Jaccard index, recall, precision, and accuracy.

python3 finetune_classifier.py --load mlstm.pt --lr 2e-5 --aux-lm-loss --aux-lm-loss-weight .02   #finetune mLSTM model on sst (default dataset) with auxiliary loss
python3 finetune_classifier.py --load mlstm.pt --automatic-thresholding --threshold-metric f1     #finetune mLSTM model on sst and automatically select classification thresholds based on the validation f1 score
python3 finetune_classifier.py --tokenizer-type SentencePieceTokenizer --vocab-size 32000 \       #finetune transformer with sentencepiece on SST
  --tokenizer-type bpe --tokenizer-path ama_32k_tokenizer.model --model transformer --lr 2e-5 \
  --decoder-layers 12 --decoder-embed-dim 768 --decoder-ffn-embed-dim 3072 \
  --decoder-learned-pos --decoder-attention-heads 8 --load transformer.pt --use-final-embed
python3 finetune_classifier.py --automatic-thresholding --non-binary-cols l1 l2 l3 --lr 2e-5\     #finetune multilayer classifier with 3 classes and 4 heads per class on some custom dataset and automatically select classification thresholds
  --classifier-hidden-layers 2048 1024 3 --heads-per-class 4 --aux-head-variance-loss-weight 1.   #`aux-head-variance-loss-weight` is an auxiliary loss to increase the variance between each of the 4 heads' weights
  --data <custom_train>.csv --val <custom_val>.csv --test <custom_test>.csv --load mlstm.pt
bash ./experiments/se_transformer_multihead.sh                                                    #finetune a multihead transformer on 8 semeval categories
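
The automatic thresholding mentioned above can be pictured as a small per-class sweep over candidate cutoffs on the validation set; a sketch (not the script's exact implementation):

import numpy as np
from sklearn.metrics import f1_score

def pick_thresholds(val_probs, val_labels, grid=np.linspace(0.05, 0.95, 19)):
    """Per-class threshold that maximizes validation F1 (illustrative sketch).

    val_probs, val_labels: arrays of shape (n_samples, n_classes).
    """
    thresholds = []
    for c in range(val_probs.shape[1]):
        scores = [f1_score(val_labels[:, c], val_probs[:, c] > t) for t in grid]
        thresholds.append(grid[int(np.argmax(scores))])
    return np.array(thresholds)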

See how to reproduce our finetuning experiments in the finetuning reproduction section of analysis.

Additional documentation of the command line arguments available for finetune_classifier.py can be found here.

Analysis

Acknowledgement

A special thanks to our amazing summer intern Neel Kant for all the work he did with transformers, tokenization, and pretraining+finetuning classification models.

A special thanks to @csarofeen and @Michael Carilli for their help developing and documenting our RNN interface, Distributed Data Parallel model, and fp16 optimizer. The latest versions of these utilities can be found at the APEx github page.

Thanks to @guillitte for providing a lightweight pytorch port of openai's sentiment-neuron repo.

This project uses the Amazon review dataset collected by J. McAuley.

Thanks

Want to help out? Open up an issue with questions/suggestions or pull requests ranging from minor fixes to new functionality.

May your learning be Deep and Unsupervised.

Comments
  • MemoryError - Amazon dataset


    Hi,

    I get a memory error while trying to train on the Amazon aggressively deduplicated data. I have 64 GB of memory installed on my system and a 1080 Ti.

    I run command inside an LXD container.

    [email protected]:~/work/sentiment-discovery# python3 main.py --data /home/adrianc/work/Sentiment/dataset/amazon/aggressive_dedup.json --lazy --loose_json --text_key reviewText --label_key overall --num_shards 1002 --optim Adam --split 1000,1,1
    configuring data
    Traceback (most recent call last):
      File "main.py", line 135, in <module>
        train_data, val_data, test_data = data_config.apply(args)
      File "/root/work/sentiment-discovery/configure_data.py", line 16, in apply
        return make_loaders(opt)
      File "/root/work/sentiment-discovery/configure_data.py", line 63, in make_loaders
        train = data_utils.make_dataset(**data_set_args)
      File "/root/work/sentiment-discovery/data_utils/__init__.py", line 133, in make_dataset
        binarize_sent=binarize_sent, delim=delim, drop_unlabeled=drop_unlabeled, loose=loose)
      File "/root/work/sentiment-discovery/data_utils/__init__.py", line 103, in handle_lazy
        binarize_sent=binarize_sent, delim=delim, drop_unlabeled=drop_unlabeled, ds=data_set)
      File "/root/work/sentiment-discovery/data_utils/__init__.py", line 54, in get_lazy
        make_lazy(processed_path, ds.X, data_type=data_shard)
      File "/root/work/sentiment-discovery/data_utils/lazy_loader.py", line 33, in make_lazy
        f.write(''.join(strs))
    MemoryError
    

    The problem is that memory usage gets beyond 64 GB.

    Regards, Adrian

    opened by adryyandc 17
  • Minimal prediction code


    Not sure if this belongs here, but what are the minimal requirements needed to evaluate a pre-trained model, if I just need to extract text embeddings (to try out transfer learning tasks)?

    opened by rainjacket 12
  • Dataloader error using Amazon data


    I'm trying to train on Amazon data. However, after preprocessing and after the first iteration I get a dataloader error saying 5 arguments were expected and only 1 was provided:

    [screenshot: dataloader error traceback, 2018-05-26]

    I'm using torch 0.4.0 and a single GPU machine.

    opened by mkachuee 9
  • Classifier Error


    I trained a very simple language model: python main.py --nhid 64 --save 'lang_model_64.pt'

    Then I tried to classify: python classifier.py --load_model 'lang_model_64.pt' --nhid 64

    And this error happened:

    RuntimeError: Error(s) in loading state_dict for stackedRNN: Missing key(s) in state_dict: "rnns.0.w_mhh_v", "rnns.0.w_hh_g", "rnns.0.w_hh_v", "rnns.0.w_mhh_g", "rnns.0.w_mih_v", "rnns.0.w_ih_g", "rnns.0.w_ih_v", "rnns.0.w_mih_g". Unexpected key(s) in state_dict: "rnns.0.w_ih", "rnns.0.w_hh", "rnns.0.w_mih", "rnns.0.w_mhh".

    opened by zfallahnejad 8
  • text classification using pretrained models usage?


    I tried classifying text using both the Binary SST & IMDB pretrained models. But all 10,000 sentences/examples from my corpus were labeled -1.0, i.e. negative sentiment?

    python classifier.py --load_model ~/imdb_clf.pt --test ~/sample10k.csv

    My corpus looks like below

    $ head -n 4 ~/sample10k.csv
    sentence
    It was for Infinity cars driving with a family nice snooth ride the XQ 60
    "I like the ad, but would like to see more interior shots. Seems to me you are describing interior roominess."
    I love the car
    The poem was really sweet.
    I really liked the car
    I love this ad because it seems to talk the real life things that can happen in a car with a family.

    Output

    $ head sample10k.sentence.label.csv
    label,sentence
    -1.0," It was for Infinity cars driving with a family nice snooth ride the XQ 60 "
    -1.0," I like the ad, but would like to see more interior shots. Seems to me you are describing interior roominess. "
    -1.0," I love the car "
    -1.0," The poem was really sweet. "
    -1.0," I really liked the car "
    -1.0," I love this ad because it seems to talk the real life things that can happen in a car with a family. "

    opened by harsham05 5
  • Transfer learning fails and cannot be restarted


    I have trained a model on my text corpus (full_model.pt) and want to see now how well it does with a labeled dataset. So I labeled the data and ran the following:

    python transfer.py --load_model full_model.pt --data ./labeled.csv --neurons 30 --epochs 5 --split 10,1,1
    configuring data
    generating csv at ./labeled.sentence.label.csv
    Creating mlstm
    writing results to full_model_transfer/sentiment
    transforming train
    batch     1/  162 | ch/s 8.56E+03 | time 7.25E+02 | time left 1.17E+05
    batch     2/  162 | ch/s 1.39E+04 | time 4.03E+02 | time left 9.02E+04
    batch     3/  162 | ch/s 1.33E+04 | time 5.10E+02 | time left 8.68E+04
    batch     4/  162 | ch/s 1.13E+04 | time 5.68E+02 | time left 8.71E+04
    batch     5/  162 | ch/s 1.29E+04 | time 5.46E+02 | time left 8.64E+04
    batch     6/  162 | ch/s 1.13E+04 | time 5.78E+02 | time left 8.66E+04
    batch     7/  162 | ch/s 1.33E+04 | time 4.90E+02 | time left 8.46E+04
    batch     8/  162 | ch/s 1.19E+04 | time 6.36E+02 | time left 8.58E+04
    batch     9/  162 | ch/s 1.27E+04 | time 5.48E+02 | time left 8.51E+04
    batch    10/  162 | ch/s 1.27E+04 | time 6.60E+02 | time left 8.61E+04
    batch    11/  162 | ch/s 1.40E+04 | time 5.55E+02 | time left 8.54E+04
    batch    12/  162 | ch/s 1.36E+04 | time 6.53E+02 | time left 8.59E+04
    batch    13/  162 | ch/s 1.11E+04 | time 7.29E+02 | time left 8.71E+04
    batch    14/  162 | ch/s 1.30E+04 | time 8.20E+02 | time left 8.90E+04
    batch    15/  162 | ch/s 1.51E+04 | time 7.54E+02 | time left 8.99E+04
    batch    16/  162 | ch/s 1.39E+04 | time 8.07E+02 | time left 9.11E+04
    batch    17/  162 | ch/s 1.11E+04 | time 1.10E+03 | time left 9.45E+04
    batch    18/  162 | ch/s 1.25E+04 | time 9.17E+02 | time left 9.60E+04
    batch    19/  162 | ch/s 1.25E+04 | time 9.85E+02 | time left 9.77E+04
    batch    20/  162 | ch/s 1.19E+04 | time 1.01E+03 | time left 9.94E+04
    batch    21/  162 | ch/s 1.28E+04 | time 1.04E+03 | time left 1.01E+05
    THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1532579245307/work/aten/src/THC/generated/../THCReduceAll.cuh line=317 error=4 : unspecified launch failure
    Traceback (most recent call last):
      File "transfer.py", line 328, in <module>
        trXt, trY = transform(model, train_data)
      File "transfer.py", line 138, in transform
        cell = model(text_batch, length_batch, args.get_hidden)
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/imsm/Documents/daniel_tmp/sentimentNvidia/sentiment-discovery-master/model/model.py", line 93, in forward
        cell = get_valid_outs(i, seq_len, cell, last_cell)
      File "/home/imsm/Documents/daniel_tmp/sentimentNvidia/sentiment-discovery-master/model/model.py", line 130, in get_valid_outs
        if (invalid_steps.long().sum() == 0):
    RuntimeError: cuda runtime error (4) : unspecified launch failure at /opt/conda/conda-bld/pytorch_1532579245307/work/aten/src/THC/generated/../THCReduceAll.cuh:317
    

    When I try to restart the training it fails immediately with error:

    python transfer.py --load_model full_model.pt --data ./labeled.csv --neurons 30 --epochs 5 --split 10,1,1
    configuring data
    Creating mlstm
    Traceback (most recent call last):
      File "transfer.py", line 89, in <module>
        sd = x = torch.load(f)
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
        return _load(f, map_location, pickle_module)
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/serialization.py", line 542, in _load
        result = unpickler.load()
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/serialization.py", line 508, in persistent_load
        data_type(size), location)
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/serialization.py", line 104, in default_restore_location
        result = fn(storage, location)
      File "/home/imsm/.conda/envs/jupyterlab/lib/python3.6/site-packages/torch/serialization.py", line 75, in _cuda_deserialize
        raise RuntimeError('Attempting to deserialize object on a CUDA '
    RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
    

    Some more details:

    torch.version.cuda
    '9.2.148'
    
    python --version
    Python 3.6.6
    
    lspci | grep VGA 
    04:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
    17:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
    65:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
    b3:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
    
    nvidia-settings --version
    nvidia-settings:  version 396.37  ([email protected])  Tue Jun 12 14:49:22 PDT 2018
    
    uname -a
    Linux imsm-gpu2 4.15.0-33-generic #36-Ubuntu SMP Wed Aug 15 16:00:05 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    
    

    Any ideas?

    opened by danielw2904 5
  • Error ModuleNotFoundError: No module named 'torch.nn._functions.rnn'


    Hi, I cloned this project and ran python3 setup.py install. Everything is OK, but when I run the script classifier.py --load_model lang_model_transfer/sentiment/sst_clf.pt --data data/icbu/icbu_negative_reviews.csv I get this error:

    Traceback (most recent call last):
      File "classifier.py", line 13, in <module>
        from apex.reparameterization import apply_weight_norm, remove_weight_norm
      File "/root/anaconda3/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/__init__.py", line 1, in <module>
      File "/root/anaconda3/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/RNN/__init__.py", line 1, in <module>
      File "/root/anaconda3/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/RNN/models.py", line 3, in <module>
    ModuleNotFoundError: No module named 'torch.nn._functions.rnn'

    opened by elixuy 5
  • Input file format


    Thank you for the awesome model. I would like to train it on a number of longer text documents and was wondering in what format I should pass the texts to the script. Can I just put them all in a single text file and pass that to main.py? Or would it be better to put them in a JSON or CSV file with one entry per file even though I do not have labels? Sorry, I am kind of confused since the model is unsupervised but the datasets still have labels.

    opened by danielw2904 5
  • UnboundLocalError: local variable 'cell' referenced before assignment


    I forked your project and changed it in order to test it for my language. I faced the following error during the run of transfer:

    transform:   0%|                                                           | 0/1 [00:00<?, ?batch/s]
    Traceback (most recent call last):
      File "transfer.py", line 403, in <module>
        main()
      File "transfer.py", line 247, in main
        trXt, trY = transform(model, train_data, args)
      File "transfer.py", line 130, in transform
        cell, _ = get_outs(text_batch, length_batch)
      File "transfer.py", line 116, in get_outs
        cell_out, lm_or_encoder_out = model(text_batch, length_batch, args.get_hidden)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/content/model/model.py", line 156, in forward
        return cell, None
    UnboundLocalError: local variable 'cell' referenced before assignment
    

    Do you have any solution for this problem? I would appreciate it if you could help me figure this out.

    This is the forked project and This is a colab notebook which contains my test.

    opened by zfallahnejad 4
  • how do i use a  pre-trained model on CPU?


    I tried the following command.

    python3 generate.py --model mLSTM --load_model mlstm.pt --neuron 2388 --visualize
    
    Warning:  apex was installed without --cpp_ext.  Falling back to Python flatten and unflatten.
    Warning:  apex was installed without --cuda_ext. Fused syncbn kernels will be unavailable.  Python fallbacks will be used instead.
    Warning:  apex was installed without --cuda_ext.  FusedAdam will be unavailable.
    Warning:  apex was installed without --cuda_ext.  FusedLayerNorm will be unavailable.
    /home/debanjan/miniconda3/envs/dsenv/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
      return f(*args, **kwds)
    /home/debanjan/miniconda3/envs/dsenv/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
      return f(*args, **kwds)
    Traceback (most recent call last):
      File "generate.py", line 90, in <module>
        sd = torch.load(f)
      File "/home/debanjan/miniconda3/envs/dsenv/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
        return _load(f, map_location, pickle_module)
      File "/home/debanjan/miniconda3/envs/dsenv/lib/python3.6/site-packages/torch/serialization.py", line 542, in _load
        result = unpickler.load()
      File "/home/debanjan/miniconda3/envs/dsenv/lib/python3.6/site-packages/torch/serialization.py", line 508, in persistent_load
        data_type(size), location)
      File "/home/debanjan/miniconda3/envs/dsenv/lib/python3.6/site-packages/torch/serialization.py", line 104, in default_restore_location
        result = fn(storage, location)
      File "/home/debanjan/miniconda3/envs/dsenv/lib/python3.6/site-packages/torch/serialization.py", line 75, in _cuda_deserialize
        raise RuntimeError('Attempting to deserialize object on a CUDA '
    RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
    

    I also tried using model.cpu() when torch.cuda.is_available() is False. I also tried using load with map_location='cpu' ... which led to inconsistencies in tensor/ndarray sizes.

    PS: I didn't find a --cpu option in the docs. Others have discussed running a model on the CPU - but I didn't find anything else. PPS: I am using pytorch-cpu version 0.4.1=py36_cpu_1 from the conda pytorch channel.

    opened by d3banjan 4
  • Visualisation issues?


    Hi,

    I am trying to clone your heatmap function but I am failing , Can you please point out where I am wrong?

    https://github.com/yashkumaratri/testrepo/blob/master/heatmap.ipynb

    it'll be a great help.(Running on openai model)

    opened by yashkumaratri 3
  • AttributeError: 'DataLoader' object has no attribute 'device'


    Here is a link to my notebook https://www.kaggle.com/sarthaksshukla/cnn-with-pytorch. When I try to run learner.lr_find(), learner being the FastAI's Learner, it shows the error shown above. Please help. I am running this notebook on Kaggle, therefore I do not know the versions of the PyTorch or FastAI that are currently being used.

    opened by Zephyr-stack 0
  • The emotion classification model's performance is almost the same as a random guess


    Hi, I repeated the emotion classification experiment and got terrible results. I couldn't figure out what the issue is.

    1. The experiment is repeated using the command line "!python3 experiments/run_clf_multihead.py --text-key Tweet --train data/semeval/train.csv --val data/semeval/val.csv --process-fn process_tweet".

    2. Then, I got a series of classifiers in transformer_multihead from step 1).

    3. Then I used "!python3 run_classifier.py --load transformer_multihead/model_ep0.clf --text-key Tweet --data data/semeval/val.csv --model transformer --write-results results/semeval/val_result.csv" on the validation set.

    4. The performance is evaluated with respect to balanced accuracy, F1 score, and ROC using the metrics module from the sklearn package. The results are shown as follows.

                          anger     anticipation  disgust   fear      joy       sadness   surprise  trust
    balanced accuracy     0.500876  0.500000      0.537070  0.500000  0.500000  0.500000  0.499412  0.500593
    f1_score              0.525000  0.245545      0.488992  0.240318  0.622084  0.460469  0.000000  0.092672
    ROC                   0.537700  0.450639      0.549253  0.474326  0.508107  0.481694  0.504079  0.500841

    Is there anything I can do to make it work?

    Regards, Yipeng

    opened by YipengUva 4
  • error: command 'gcc' failed with exit status 1


    Hi everyone, I followed this suggestion in #723, because I couldn't use apex from another directory. So I tried to use python setup.py install --cuda_ext --cpp_ext and I got these errors:

    (base) [[email protected] apex]$ python3 setup.py install --cuda_ext --cpp_ext
    torch.__version__ = 1.2.0
    setup.py:46: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies!
      warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")

    Compiling cuda extensions with
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2018 NVIDIA Corporation
    Built on Sat_Aug_25_21:08:01_CDT_2018
    Cuda compilation tools, release 10.0, V10.0.130
    from /usr/local/cuda/bin

    running install
    running bdist_egg
    running egg_info
    [...]
    running build_ext
    building 'apex_C' extension
    gcc -pthread -B /home/narimene/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC [...] -c csrc/flatten_unflatten.cpp -o build/temp.linux-x86_64-3.7/csrc/flatten_unflatten.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=apex_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/c10/core/TensorTypeIdRegistration.h:50:16: error: 'mutex' in namespace 'std' does not name a type
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/c10/util/typeid.h:596:59: error: 'mutex' is not a member of 'std'
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/c10/util/Registry.h:157:8: error: 'mutex' in namespace 'std' does not name a type
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h:23:17: error: 'once_flag' in namespace 'std' does not name a type
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h:26:7: error: 'call_once' is not a member of 'std'
    /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:207:21: error: 'mutex' is not a member of 'std'
    [...]
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/autograd/variable.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/autograd/variable.h:356:8: error: 'mutex' in namespace 'std' does not name a type std::mutex mutex_; ^ In file included from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue.h:569:0, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/stack.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/tracer.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/autograd/generated/variable_factories.h:9, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:7, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:260:8: error: 'mutex' in namespace 'std' does not name a type std::mutex mutex_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:262:8: error: 'condition_variable' in namespace 'std' does not name a type std::condition_variable finished_cv_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h: In member function 'void c10::ivalue::Future::wait()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:180:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:180:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:180:32: error: template argument 1 is invalid 
std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:180:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:180:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:182:7: error: 'finished_cv_' was not declared in this scope finished_cv_.wait(lock); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h: In member function 'void c10::ivalue::Future::markCompleted(c10::IValue)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:196:5: error: 'finished_cv_' was not declared in this scope finished_cv_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:190:34: warning: unused variable 'lock' [-Wunused-variable] std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h: In member function 'void c10::ivalue::Future::markCompleted(c10::ivalue::Future::FutureError&&)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:211:5: error: 'finished_cv_' was not declared in this scope finished_cv_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:204:34: warning: unused variable 'lock' [-Wunused-variable] std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h: In 
member function 'c10::IValue c10::ivalue::Future::value()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:216:34: warning: unused variable 'lock' [-Wunused-variable] std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h: In member function 'void c10::ivalue::Future::addCallback(std::function<void()>)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:231:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:231:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:231:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:231:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:231:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h:233:12: error: request for member 'unlock' in 'lock', which is of non-class type 'int' lock.unlock(); ^ In file included from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/data_shuttle.h:3:0, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:79:8: error: 'mutex' in namespace 'std' does 
not name a type std::mutex mutex_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:80:8: error: 'condition_variable' in namespace 'std' does not name a type std::condition_variable cv_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h: In member function 'void torch::data::detail::Queue::push(T)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:33:23: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:33:23: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:33:33: error: template argument 1 is invalid std::lock_guardstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:33:39: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:33:40: error: 'mutex_' was not declared in this scope std::lock_guardstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:36:5: error: 'cv_' was not declared in this scope cv_.notify_one(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h: In member function 'T torch::data::detail::Queue::pop(c10::optional<std::chrono::duration<long int, std::ratio<1l, 1000l> > >)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:44:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:44:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:44:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:44:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:44:39: error: 'mutex_' was not declared in this scope std::unique_lockstd::mutex lock(mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:46:12: error: 'cv_' was not declared in this scope if (!cv_.wait_for( ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:55:7: error: 'cv_' was not declared in this scope cv_.wait(lock, [this] { return !this->queue_.empty(); }); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:60:10: error: request for member 'unlock' in 'lock', which is of non-class type 'int' lock.unlock(); ^ 
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h: In member function 'size_t torch::data::detail::Queue::clear()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:69:21: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(this->mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:69:21: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:69:31: error: template argument 1 is invalid std::lock_guardstd::mutex lock(this->mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/detail/queue.h:69:37: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(this->mutex_); ^ In file included from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3:0, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:234:15: error: 'thread' is not a member of 'std' std::vectorstd::thread workers_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:234:15: error: 'thread' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:234:26: error: template argument 1 is invalid std::vectorstd::thread workers_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:234:26: error: template argument 2 is invalid /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h: In member function 'void torch::data::DataLoaderBase<Dataset, Batch, BatchRequest>::join()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:88:25: error: there are no arguments to 'begin' that depend on a template parameter, so a declaration of 'begin' must be available [-fpermissive] for (auto& worker : workers_) { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:88:25: note: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated) /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:88:25: error: there are no arguments to 'end' that depend on a template parameter, so a declaration of 'end' must be available [-fpermissive] In file included from 
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/script/compilation_unit.h:3:0, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/script/module.h:14, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:7, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:6, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/function.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/function.h:127:8: error: 'once_flag' in namespace 'std' does not name a type std::once_flag executor_init_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/function.h: In member function 'torch::jit::GraphExecutor& torch::jit::Function::get_executor()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/function.h:97:5: error: 'call_once' is not a member of 'std' std::call_once(executor_init_, [&] { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/function.h:97:20: error: 'executor_init_' was not declared in this scope std::call_once(executor_init_, [&] { ^ In file included from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4:0, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4, from csrc/flatten_unflatten.cpp:1: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:224:8: error: 'mutex' in namespace 'std' does not name a type std::mutex queue_mutex_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:226:8: error: 'condition_variable' in namespace 'std' does not name a type std::condition_variable cv_read_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:227:8: error: 'condition_variable' in namespace 'std' does not name a type std::condition_variable cv_write_; ^ 
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::BatchType torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::get_batch()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:64:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:64:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:64:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:64:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:64:39: error: 'queue_mutex_' was not declared in this scope std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:65:5: error: 'cv_read_' was not declared in this scope cv_read_.wait(lock, [this] { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:85:10: error: request for member 'unlock' in 'lock', which is of non-class type 'int' lock.unlock(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:86:5: error: 'cv_write_' was not declared in this scope cv_write_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::add_chunk_data(torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::UnwrappedBatchType)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:94:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:94:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:94:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:94:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:94:39: error: 'queue_mutex_' was not declared in this scope std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:95:5: error: 'cv_write_' was not declared in this scope 
cv_write_.wait(lock, [this] { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:148:10: error: request for member 'unlock' in 'lock', which is of non-class type 'int' lock.unlock(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:149:5: error: 'cv_read_' was not declared in this scope cv_read_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::add_chunk_data(std::exception_ptr::exception_ptr)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:155:22: error: 'mutex' is not a member of 'std' std::unique_lockstd::mutex lock(queue_mutex); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:155:22: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:155:32: error: template argument 1 is invalid std::unique_lockstd::mutex lock(queue_mutex); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:155:38: error: invalid type in declaration before '(' token std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:155:39: error: 'queue_mutex_' was not declared in this scope std::unique_lockstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:156:5: error: 'cv_write_' was not declared in this scope cv_write_.wait(lock, [this] { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:170:10: error: request for member 'unlock' in 'lock', which is of non-class type 'int' lock.unlock(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:171:5: error: 'cv_read_' was not declared in this scope cv_read_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::detail::BatchDataBuffer<UnwrappedBatch, ExampleSampler>::stop()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:187:23: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:187:23: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:187:33: error: template argument 1 is invalid std::lock_guardstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:187:39: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(queue_mutex_); ^ 
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:187:40: error: 'queue_mutex_' was not declared in this scope std::lock_guardstd::mutex lock(queue_mutex_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:192:5: error: 'cv_write_' was not declared in this scope cv_write_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:194:5: error: 'cv_read_' was not declared in this scope cv_read_.notify_all(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: At global scope: /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:487:15: error: 'thread' is not a member of 'std' std::vectorstd::thread preload_threads_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:487:15: error: 'thread' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:487:26: error: template argument 1 is invalid std::vectorstd::thread preload_threads_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:487:26: error: template argument 2 is invalid /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:512:16: error: 'mutex' in namespace 'std' does not name a type mutable std::mutex chunk_index_guard_; ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::reset()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:374:22: error: request for member 'clear' in '((torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>)this)->torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::preload_threads_', which is of non-class type 'int' preload_threads_.clear(); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:396:24: error: request for member 'emplace_back' in '((torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>)this)->torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::preload_threads_', which is of non-class type 'int' preload_threads_.emplace_back(this, i { this->preloader(i); }); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::save(torch::serialize::OutputArchive&) const': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:412:21: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:412:21: error: 'mutex' is not a member of 'std' 
/home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:412:31: error: template argument 1 is invalid std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:412:37: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:412:38: error: 'chunk_index_guard_' was not declared in this scope std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::load(torch::serialize::InputArchive&)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:417:21: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:417:21: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:417:31: error: template argument 1 is invalid std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:417:37: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:417:38: error: 'chunk_index_guard_' was not declared in this scope std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, ExampleSampler>::preloader(size_t)': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:429:27: error: 'mutex' is not a member of 'std' std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:429:27: error: 'mutex' is not a member of 'std' /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:429:37: error: template argument 1 is invalid std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:429:43: error: invalid type in declaration before '(' token std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:429:44: error: 'chunk_index_guard_' was not declared in this scope std::lock_guardstd::mutex lock(chunk_index_guard_); ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h: In member function 'void torch::data::datasets::ChunkDataset<ChunkReader, ChunkSampler, 
ExampleSampler>::free_workers()': /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:464:34: error: there are no arguments to 'begin' that depend on a template parameter, so a declaration of 'begin' must be available [-fpermissive] for (auto& worker_thread : preload_threads_) { ^ /home/narimene/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:464:34: error: there are no arguments to 'end' that depend on a template parameter, so a declaration of 'end' must be available [-fpermissive] error: command 'gcc' failed with exit status 1 `

    I don't know how to fix this problem. Thank you for your help in advance.

    opened by sab148 0
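
    The 'std::mutex', 'std::once_flag', and 'std::thread' errors in the log above usually mean the C++ toolchain is not compiling with C++11/C++14 support, which the PyTorch headers require; the file being compiled (csrc/flatten_unflatten.cpp) suggests the log comes from building NVIDIA APEx's C++ extensions. A minimal troubleshooting sketch, assuming an APEx checkout and assuming the build picks up CFLAGS (a hedged suggestion, not a verified fix):

        # check the compiler; gcc releases older than ~5 do not default to C++11
        gcc --version
        # from inside the apex checkout, retry the extension build with an explicit C++ standard
        CFLAGS="-std=c++14" pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
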
  • run_classifier.py invalid arguments

    run_classifier.py invalid arguments

    I downloaded the sst_clf_16.pt file to perform classification from a csv file. The code I wrote:

    python run_classifier.py --load_model sst_clf_16.pt --fp16 --data upload_processed_tweet.csv --text-key text --write-results output.csv

    The error I got was that --load_model is an "unrecognized argument":

    run_classifier.py: error: unrecognized arguments: --load_model sst_clf_16.pt

    opened by crazylazylife 2
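
    When an argparse-based script rejects a flag as unrecognized, the quickest check is to list the arguments the installed version actually accepts; the checkpoint-loading flag may have changed names between releases of this repo, so use whatever --help reports rather than --load_model specifically. A minimal sketch:

        # list the flags this version of the script accepts
        python run_classifier.py --help
        # or inspect the argument definitions directly
        grep -n "add_argument" run_classifier.py
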
  • Unable to run train.py Keras retinanet in Ubuntu on Remote server

    Unable to run train.py Keras retinanet in Ubuntu on Remote server

    python3 train.py csv home/shpriya/dataset/dataset/Annotations.csv home/shpriya/dataset/dataset/Class.csv

    The error is as follows: train.py: error: unrecognized arguments: csv home/shpriya/dataset/dataset/Annotations.csv home/shpriya/dataset/dataset/Class.csv

    These should not be unrecognized arguments. I am following the GitHub link https://github.com/fizyr/keras-retinanet and its instructions:

    Running directly from the repository:

    keras_retinanet/bin/train.py csv /path/to/csv/file/containing/annotations /path/to/csv/file/containing/classes

    Please help me! I am stuck!

    opened by shreyapriya700 1
Releases(v0.3.large_batch_stable)
  • v0.3.large_batch_stable(Dec 14, 2018)

  • v0.3(Apr 6, 2018)

    We've switched our mLSTM model to internally use PyTorch's fused LSTM cell, which provides significantly improved GPU memory usage (allowing for larger-batch training) and slight speed improvements compared to the unfused version included in earlier versions.

    In order to convert any models you've trained in the past to be usable with this version, please see this issue.

    We've also updated our distributed code to address the recent April 3rd changes made to PyTorch's Tensors and Variables.

    Source code(tar.gz)
    Source code(zip)
  • v0.2(Mar 13, 2018)

    Our main goal with this release is two-fold:

    • Address concerns around usability
    • Update the repo with new code for FP16 and distributed training

    Usability

    • We've brought our training/generation code more in line with the PyTorch word language model example
    • Provide a PyTorch classifier module/function for classifying sentiment from an input text tensor
      • Provide pretrained classifiers/language models for this module
      • Provide simple standalone classifier script/example capable of classifying an input csv/json and writing results to other csv/jsons
    • Flattening our directory structure to make code easier to find
    • Putting reusable PyTorch functionality (new RNN api, weight norm functionality, eventually all fp16 functionality) in its own standalone python module to be published at a later date

    FP16 + Distributed

    • FP16 optimizer wrapper for optimizing FP16 models according to our [best practices](https://github.com/NVIDIA/sentiment-discovery/blob/master/analysis/reproduction.md#fp16-training) (see the sketch after this list)
      • available in fp16/fp16.py
    • Lightweight distributed wrapper for all-reducing gradients across multiple GPUs with either NCCL or Gloo backends
      • model/distributed.py
    • distributed worker launch script
      • multiproc.py
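
    A minimal sketch of how the FP16 optimizer wrapper is typically used, assuming fp16/fp16.py exposes an FP16_Optimizer class with the loss-scaling interface that later migrated into APEx (the import path, constructor arguments, and method names here are assumptions, not verified against this release):

        import torch
        from fp16 import FP16_Optimizer  # assumed export from fp16/fp16.py

        # toy FP16 model plus a standard optimizer (requires a CUDA GPU)
        model = torch.nn.Linear(16, 4).cuda().half()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        optimizer = FP16_Optimizer(optimizer, static_loss_scale=128.0)

        x = torch.randn(8, 16).cuda().half()
        loss = model(x).float().pow(2).mean()  # keep the loss computation in FP32

        optimizer.zero_grad()
        optimizer.backward(loss)  # scales the loss and backpropagates through the FP16 model
        optimizer.step()          # unscales, updates FP32 master weights, copies back to FP16

    The key design point is that loss.backward() is replaced by optimizer.backward(loss), so loss scaling stays inside the wrapper.
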
    Source code(tar.gz)
    Source code(zip)
  • v0.1(Dec 12, 2017)

    Module updates

    • Fused LSTM kernels in mLSTM module with fuse_lstm flags

    Model updates

    • improved model serialization size and options
      • no saving of gradients
      • saving optimizer is optional
      • reloading weights trained with weight norm is more stable

    Weight Norm/Reparameterization update

    • modified hooks to work with fused LSTM kernel

    Data updates

    • Parses dataset types (csv, json, etc.) automatically. Only need to specify supervised vs unsupervised
    • Added loose json functionality (see the example after this list)
    • Tested csv datasets more thoroughly
    • Save names of processed results fixed so that the original file's name stays the same now
    • Fixed DataParallel/DistributedDP batching of evaluation datasets
    • Made it easier to specify validation/test datasets
    • Made it easier to specify dataset shards
    • Added negative sequence lengths for datasets
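
    For the loose json item above: loose json here generally means one JSON object per line (commonly called JSON lines) rather than a single JSON array. The field names below ('text', 'label') are illustrative assumptions; the actual keys are supplied through the scripts' text/label key options. A minimal sketch of such a file:

        {"text": "this movie was great", "label": 1}
        {"text": "terrible service, would not recommend", "label": 0}
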
    Source code(tar.gz)
    Source code(zip)
Owner
NVIDIA Corporation