Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation


Python 3.6 · PyTorch 1.5.0 · CUDA 10.2 · License: CC BY-NC

Our paper has been accepted at ICCV 2021.

Figure: Overview of the proposed Plug-and-Play (PnP) adaptation framework for generalizing gaze estimation to a new domain.

Figure: The proposed architecture.


Results

Angular gaze error in degrees (lower is better); percentages show the improvement over the same method without PnP-GA. Task notation: DE: ETH-XGaze, DG: Gaze360, DM: MPIIGaze, DD: EyeDiap.

| Input | Method | DE→DM | DE→DD | DG→DM | DG→DD |
| --- | --- | --- | --- | --- | --- |
| Face | Baseline | 8.767 | 8.578 | 7.662 | 8.977 |
| Face | Baseline + PnP-GA | 5.529 (↓36.9%) | 5.867 (↓31.6%) | 6.176 (↓19.4%) | 7.922 (↓11.8%) |
| Face | ResNet50 | 8.017 | 8.310 | 8.328 | 7.549 |
| Face | ResNet50 + PnP-GA | 6.000 (↓25.2%) | 6.172 (↓25.7%) | 5.739 (↓31.1%) | 7.042 (↓6.7%) |
| Face | SWCNN | 10.939 | 24.941 | 10.021 | 13.473 |
| Face | SWCNN + PnP-GA | 8.139 (↓25.6%) | 15.794 (↓36.7%) | 8.740 (↓12.8%) | 11.376 (↓15.6%) |
| Face + Eye | CA-Net | -- | -- | 21.276 | 30.890 |
| Face + Eye | CA-Net + PnP-GA | -- | -- | 17.597 (↓17.3%) | 16.999 (↓44.9%) |
| Face + Eye | Dilated-Net | -- | -- | 16.683 | 18.996 |
| Face + Eye | Dilated-Net + PnP-GA | -- | -- | 15.461 (↓7.3%) | 16.835 (↓11.4%) |

This repository contains the official PyTorch implementation of the following paper:

Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation
Yunfei Liu, Ruicong Liu, Haofei Wang, Feng Lu

Abstract: Deep neural networks have significantly improved appearance-based gaze estimation accuracy. However, they still suffer from unsatisfactory performance when the trained model is generalized to new domains, e.g., unseen environments or persons. In this paper, we propose a plug-and-play gaze adaptation framework (PnP-GA), which is an ensemble of networks that learn collaboratively with the guidance of outliers. Since our proposed framework does not require ground-truth labels in the target domain, existing gaze estimation networks can be directly plugged into PnP-GA to generalize them to new domains. We test PnP-GA on four gaze domain adaptation tasks: ETH-to-MPII, ETH-to-EyeDiap, Gaze360-to-MPII, and Gaze360-to-EyeDiap. The experimental results demonstrate that the PnP-GA framework achieves considerable performance improvements of 36.9%, 31.6%, 19.4%, and 11.8% over the baseline system. The proposed framework also outperforms state-of-the-art domain adaptation approaches on gaze domain adaptation tasks.

Resources

Material related to our paper is available via the following links:

System requirements

  • Only Linux has been tested; Windows support is still being tested.
  • 64-bit Python 3.6 installation.

Playing with pre-trained networks and training

Config

You need to modify config.yaml first, especially the xxx/image, xxx/root, and xxx_pretrains parameters.

xxx/image represents the path of the label file.

xxx/root represents the root path of the image files.

xxx_pretrains represents the path of the pretrained models.
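
For illustration, the snippet below shows how such a config might be loaded with PyYAML; the eth key and the exact nesting are assumptions to be checked against the shipped config.yaml, not the project's confirmed schema.

```python
import yaml  # requires PyYAML

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical key layout: "xxx" stands for a dataset name such as "eth".
label_path = cfg["eth"]["image"]    # path of the label file
image_root = cfg["eth"]["root"]     # root directory of the image files
pretrained = cfg["eth_pretrains"]   # path of the pretrained models
```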

An example label file is provided in the data folder. Each line of the label file is formatted as:

p00/face/1.jpg 0.2558059438789034,-0.05467275933864655 -0.05843388117618364,0.46745964684693614 ... ...

Our code reads the image data from os.path.join(xxx/root, "p00/face/1.jpg") and reads the ground-truth gaze direction from the remaining fields of the label file.
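
As a minimal sketch of this format, the parser below splits one label line into an image path and a gaze label; treating the second field as the gaze direction (and ignoring the remaining pairs) is an assumption based on the description above.

```python
import os

def parse_label_line(line, image_root):
    """Split one label-file line into a full image path and a gaze label."""
    fields = line.strip().split()
    image_path = os.path.join(image_root, fields[0])  # e.g. p00/face/1.jpg
    # Assumption: the second field is the 2D gaze direction "a,b"; the
    # remaining comma-separated pairs are other per-sample annotations.
    gaze = [float(v) for v in fields[1].split(",")]
    return image_path, gaze

# Example:
# parse_label_line("p00/face/1.jpg 0.2558,-0.0547 -0.0584,0.4675", "/data/eth")
# -> ("/data/eth/p00/face/1.jpg", [0.2558, -0.0547])
```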

Train

We provide three optional arguments: --oma2, --js, and --sg. They represent three different network components, which are described in our paper.

--source and --target represent the datasets used as the source and target domains. You can choose among eth, gaze360, mpii, and edp.

--i represents the index of the person used as the training set. Set it to -1 to use all persons as the training set.

--pics represents the number of target domain samples for adaptation.

We also provide other arguments for adjusting the hyperparameters of our PnP-GA architecture; they are described in our paper.

For example, you can run the code as follows:

python3 adapt.py --i 0 --pics 10 --savepath path/to/save --source eth --target mpii --gpu 0 --js --oma2 --sg

Test

--i, --savepath, and --target are the same as in training.

--p represents the index of the person used as the training set during the adaptation process.

For example, you can run the code as follows:

python3 test.py --i -1 --p 0 --savepath path/to/save --target mpii

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{liu2021PnP_GA,
  title={Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation},
  author={Liu, Yunfei and Liu, Ruicong and Wang, Haofei and Lu, Feng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Contact

If you have any questions, feel free to e-mail me at: lyunfei(at)buaa.edu.cn
