2017 VQA Challenge Winner (CVPR'17 Workshop)

PyTorch implementation of "Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge" by Teney et al.

Model architecture
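The model follows the paper: a GRU question encoder over GloVe embeddings, top-down attention over the 36 bottom-up image features, gated tanh non-linearities, element-wise product fusion, and a sigmoid classifier over candidate answers. As a rough, non-authoritative sketch of that design (layer names and dimensions below are illustrative assumptions, not this repository's actual model code):

    # Illustrative sketch of the model described in the paper; layer names and
    # dimensions are assumptions, not this repository's actual model code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedTanh(nn.Module):
        """y = tanh(W1 x) * sigmoid(W2 x), the non-linearity used throughout the paper."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.fc = nn.Linear(in_dim, out_dim)
            self.gate = nn.Linear(in_dim, out_dim)

        def forward(self, x):
            return torch.tanh(self.fc(x)) * torch.sigmoid(self.gate(x))

    class VQAModel(nn.Module):
        def __init__(self, vocab_size, n_answers, emb_dim=300, q_dim=512, v_dim=2048, hid=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)  # initialized from GloVe in practice
            self.gru = nn.GRU(emb_dim, q_dim, batch_first=True)
            # Top-down attention over the 36 image regions, conditioned on the question
            self.att = nn.Sequential(GatedTanh(q_dim + v_dim, hid), nn.Linear(hid, 1))
            self.q_proj = GatedTanh(q_dim, hid)
            self.v_proj = GatedTanh(v_dim, hid)
            self.classifier = nn.Sequential(GatedTanh(hid, hid), nn.Linear(hid, n_answers))

        def forward(self, q_tokens, v_feats):
            # q_tokens: (B, T) word indices; v_feats: (B, 36, 2048) bottom-up features
            _, h = self.gru(self.embed(q_tokens))            # h: (1, B, q_dim)
            q = h.squeeze(0)                                 # (B, q_dim)
            q_tiled = q.unsqueeze(1).expand(-1, v_feats.size(1), -1)
            a = F.softmax(self.att(torch.cat([q_tiled, v_feats], dim=2)), dim=1)  # (B, 36, 1)
            v = (a * v_feats).sum(dim=1)                     # attended image feature, (B, v_dim)
            joint = self.q_proj(q) * self.v_proj(v)          # element-wise product fusion
            return self.classifier(joint)                    # answer logits (train with sigmoid + soft scores)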

Prerequisites

  • Python 3.6
  • PyTorch 0.4.1 (the versions this code was tested with, per the Notes below)

Data

  • VQA v2.0 question and annotation files
  • GloVe pretrained word vectors (300-d)
  • Pretrained bottom-up visual features (36 per image)

All of the above are fetched by scripts/download_extract.sh (see Preparation).

Preparation

  • To download and extract VQA v2, GloVe, and the pretrained visual features:
    bash scripts/download_extract.sh
  • To prepare the data for training:
    python scripts/preproc.py
  • The data/ directory should then look like this (a quick sanity check of the generated files is sketched after the tree):
    - data/
      - zips/
        - v2_XXX...zip
        - ...
        - glove...zip
        - trainval_36.zip
      - glove/
        - glove...txt
        - ...
      - v2_XXX.json
      - ...
      - trainval_resnet...tsv
      (The above are files created after executing scripts/download_extract.sh)
      - tokenizers/
        - ...
      - dict_ans.pkl
      - dict_q.pkl
      - glove_pretrained_300.npy
      - train_qa.pkl
      - val_qa.pkl
      - train_vfeats.pkl
      - val_vfeats.pkl
      (The above are files created after executing scripts/preproc.py)
    
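Assuming the generated *.pkl files are ordinary Python pickles and glove_pretrained_300.npy is a NumPy array (the file names suggest this, but scripts/preproc.py is the authority on their exact contents), a quick sanity check of the preprocessing output could look like:

    # Hedged sanity check for the preprocessing output; the exact structure of each
    # file is defined by scripts/preproc.py and may differ from the comments here.
    import pickle
    import numpy as np

    with open('data/dict_q.pkl', 'rb') as f:
        dict_q = pickle.load(f)            # question-word vocabulary
    with open('data/dict_ans.pkl', 'rb') as f:
        dict_ans = pickle.load(f)          # candidate-answer vocabulary
    with open('data/train_qa.pkl', 'rb') as f:
        train_qa = pickle.load(f)          # preprocessed question/answer pairs
    with open('data/train_vfeats.pkl', 'rb') as f:
        train_vfeats = pickle.load(f)      # image id -> pretrained visual features

    glove = np.load('data/glove_pretrained_300.npy')

    print('question vocab size: ', len(dict_q))
    print('answer vocab size:   ', len(dict_ans))
    print('train QA entries:    ', len(train_qa))
    print('images with features:', len(train_vfeats))
    print('GloVe matrix shape:  ', glove.shape)  # expected (vocab, 300)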

Train

To train with the default parameters:

bash scripts/train.sh
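
scripts/train.sh wraps the training entry point with its default hyper-parameters. For reference, the objective in the paper is a binary cross-entropy against soft answer scores (annotator agreement) rather than a plain softmax; a minimal, hypothetical training step in that style (names below are placeholders, not taken from this repo's train.py) would be:

    # Hypothetical training step using the paper's soft-score objective;
    # function and variable names are placeholders, not this repo's train.py.
    import torch.nn.functional as F

    def train_step(model, optimizer, q_tokens, v_feats, answer_scores):
        # answer_scores: (B, n_answers) soft targets in [0, 1] derived from annotator agreement
        logits = model(q_tokens, v_feats)
        loss = F.binary_cross_entropy_with_logits(logits, answer_scores)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()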

Notes

  • Major refactor (especially of the data preprocessing); tested with PyTorch 0.4.1 and Python 3.6
  • Training for 20 epochs reaches around 50% training accuracy (the model in this implementation seems buggy)
  • After all the preprocessing, the data/ directory can grow to 38 GB or more
  • Parts of preproc.py and utils.py are based on this repo

Resources

  • Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge, Teney et al.
