Global-Local Attention for Emotion Recognition

Overview

This repository contains the implementation of the Global-Local Attention mechanism (GLAMOR-Net) for emotion recognition.

Requirements

  • Python 3
  • Install tensorflow (or tensorflow-gpu) >= 2.0.0
  • Install the other required packages:
pip install cython
pip install opencv-python==4.3.0.36 matplotlib numpy==1.18.5 dlib
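
A quick way to confirm the environment is set up correctly is to import the packages and print their versions (a minimal sketch; only the versions listed above are required):

# Sanity check that the required packages are importable
import tensorflow as tf
import cv2
import dlib
import numpy as np
import matplotlib

print("TensorFlow:", tf.__version__)  # expected >= 2.0.0
print("OpenCV:", cv2.__version__)     # expected 4.3.0.36
print("NumPy:", np.__version__)       # expected 1.18.5
print("dlib:", dlib.__version__)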

Dataset

We provide the NCAER-S dataset with the original images and the extracted faces (for each image, a .txt file containing the bounding-box coordinates of the detected faces, four values per box; see the parsing sketch after the directory layout below).

The dataset can be downloaded from Google Drive.

Note that the dataset and labels should be structured as follows:

NCAER-S 
│
└───images
│   │
│   └───class_1
│   │   │   img1.jpg
│   │   │   img2.jpg
│   │   │   ...
│   └───class_2
│       │   img1.jpg
│       │   img2.jpg
│       │   ...
│   
└───crop
│   │
│   └───class_1
│   │   │   img1.txt
│   │   │   img2.txt
│   │   │   ...
│   └───class_2
│       │   img1.txt
│       │   img2.txt
│       │   ...
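
Each file under crop/ mirrors an image under images/ and stores the face bounding boxes for that image. For reference, a crop file could be parsed like this (a minimal sketch; the assumption that each line holds one face as four whitespace-separated values x1 y1 x2 y2 is illustrative and should be checked against the actual files):

def read_crop_file(path):
    """Return a list of (x1, y1, x2, y2) face boxes from one crop .txt file."""
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 4:  # one face per line, four coordinates (assumed format)
                boxes.append(tuple(float(v) for v in parts))
    return boxes

# Example (hypothetical path following the layout above):
# boxes = read_crop_file("NCAER-S/crop/class_1/img1.txt")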

Running

Our code supports the following execution modes, selected with the argument -m or --mode:

#extract faces from the <train, val or test> dataset (specified in config.py)
python run.py -m extract --dataset_type=train

#train the model with the config specified in config.py
python run.py -m train 

#evaluate the trained model on the dataset <dataset_type>
python run.py -m eval --dataset_type=test --trained_weights=path/to/weights

Evaluation

Our trained model is available at weights/glamor-net/Model.

  • First, download the dataset and extract it into the "data/" directory.
  • Then specify the path to the test data (images and crop):
config = config.copy({
    'test_images': 'path_to_test_images',
    'test_crop':   'path_to_test_cropped_faces'  # .txt files
})
  • Run this command to evaluate the model. We use classification accuracy as the evaluation metric.
# Evaluate our model in the test set
python run.py -m eval --dataset_type=test --trained_weights=weights/glamor-net/Model

Training

First, extract the faces from the training set (the validation set is optional):

  • Specify the path to the dataset in config.py (train_images, val_images, test_images)
  • Specify the desired face-extracted output path in config.py (train_crop, val_crop, test_crop)
config = config.copy({

    'train_images': 'path_to_training_images',
    'train_crop':   'path_to_training_cropped_faces',  # .txt files

    'val_images': 'path_to_validation_images',
    'val_crop':   'path_to_validation_cropped_faces'   # .txt files

})
  • Perform face extraction on each dataset split by running the command:
python run.py -m extract --dataset_type=<train, val or test>

Start training:

# Train a new model from scratch
python run.py -m train 

# Continue training a model that you had trained earlier
python run.py -m train --resume=path/to/trained_weights

# Resume the last checkpoint model
python run.py -m train --resume=last

Prediction

We support prediction on a single image or on all images in a directory by running these commands:

# Predict on single image
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/1.jpg --output=path/to/out/directory

# Predict on images in directory
python predict.py --trained_weights=weights/glamor-net/Model --input=test_images/ --output=out/

Use the help option to see a description of all available command-line arguments.
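
For example (assuming both scripts expose the standard argparse -h/--help flag):

python run.py --help
python predict.py --help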

Owner
Minh Nhat Le