InsightFace: 2D and 3D Face Analysis Project on MXNet and PyTorch


By Jia Guo and Jiankang Deng

Top News

2021-06-05: We launched the Masked Face Recognition Challenge & Workshop at ICCV 2021.

2021-05-15: We released SCRFD, an efficient, high-accuracy face detection approach.

2021-04-18: We achieved 4th place on NIST-FRVT 1:1; see the leaderboard.

2021-03-13: We released our official ArcFace PyTorch implementation; see here.

License

The code of InsightFace is released under the MIT License. There is no limitation on either academic or commercial usage.

The training data, including its annotations (and the models trained on it), are available for non-commercial research purposes only.

Introduction

InsightFace is an open source 2D&3D deep face analysis toolbox, mainly based on MXNet and PyTorch.

The master branch works with MXNet 1.2 to 1.6 and PyTorch 1.6+, with Python 3.x.

ArcFace Video Demo

ArcFace Demo

Please click the image to watch the YouTube video. For Bilibili users, click here.

Recent Update


2021-03-09: Tips for training large-scale face recognition models, such as those with millions of IDs (classes).

2021-02-21: We provide a simple face mask renderer here, which can be used as a data augmentation tool when training face recognition models.

2021-01-20: OneFlow-based implementation of ArcFace and Partial-FC, here.

2020-10-13: A new training method and one large training set (360K IDs) were released here by DeepGlint.

2020-10-09: We opened a large-scale recognition test benchmark, IFRT.

2020-08-01: We released lightweight facial landmark models with fast coordinate regression (106 points). See detail here.

2020-04-27: InsightFace pretrained models and MS1M-Arcface are now specified as the only external training resources for the iQIYI iCartoonFace challenge; see detail here.

2020-02-21: Instant discussion group created on QQ with group-id: 711302608. For English developers, see the install tutorial here.

2020-02-16: RetinaFace can now detect faces with masks, which helps fight COVID-19; see detail here.

2019-08-10: We achieved 2nd place at the WIDER Face Detection Challenge 2019.

2019-05-30: Presentation at cvmart.

2019-04-30: Our face detector (RetinaFace) obtains state-of-the-art results on the WiderFace dataset.

2019-04-14: We will launch a Light-weight Face Recognition challenge/workshop at ICCV 2019.

2019-04-04: ArcFace achieved state-of-the-art performance (7/109) on the NIST Face Recognition Vendor Test (FRVT) (1:1 verification) report (names: Imperial-000 and Imperial-001). Our solution is based on [MS1MV2+DeepGlintAsian, ResNet100, ArcFace loss].

2019-02-08: Please check https://github.com/deepinsight/insightface/tree/master/recognition/ArcFace for our parallel training code, which can easily and efficiently support one million identities on a single machine (8× 1080 Ti).

2018-12-13: Inference acceleration TVM-Benchmark.

2018-10-28: Light-weight attribute model Gender-Age. About 1MB; 10ms on a single CPU core. Gender accuracy is 96% on the validation set, with an age MAE of 4.1.

2018-10-16: We achieved state-of-the-art performance on Trillionpairs (name: nttstar) and IQIYI_VID (name: WitcheR).

Contents

Deep Face Recognition

Face Detection

Face Alignment

Citation

Contact

Deep Face Recognition

Introduction

In this module, we provide training data, network settings and loss designs for deep face recognition. The training data includes, but is not limited to, the cleaned MS1M, VGG2 and CASIA-WebFace datasets, which are already packed in MXNet binary format. The network backbones include ResNet, MobileFaceNet, MobileNet, InceptionResNet_v2, DenseNet, etc. The loss functions include Softmax, SphereFace, CosineFace, ArcFace, Sub-Center ArcFace and Triplet (Euclidean/Angular) Loss.

You can check the detail pages of our works ArcFace (accepted at CVPR 2019) and Sub-Center ArcFace (accepted at ECCV 2020).

margin penalty for target logit

Our method, ArcFace, was initially described in an arXiv technical report. Using this module, you can easily achieve LFW 99.83%+ and MegaFace 98%+ with a single model. This module helps researchers and engineers develop deep face recognition algorithms quickly in only two steps: download the binary dataset and run the training script.
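The additive angular margin at the heart of ArcFace is compact enough to sketch directly. The NumPy snippet below is an illustrative sketch of the idea only, not the repository's training code; the function name is ours, and the defaults s=64 and m=0.5 follow the values reported in the paper.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Additive angular margin applied to the target-class logits.

    embeddings: (N, d) feature vectors; weights: (C, d) class centres;
    labels: (N,) ground-truth class indices. s is the feature scale and
    m the angular margin.
    """
    # Cosine similarity between L2-normalised features and class centres.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # (N, C)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # Add the margin m only to each sample's target-class angle.
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    logits = np.where(target, np.cos(theta + m), cos)
    return s * logits
```

In the actual loss, these scaled logits are fed to an ordinary softmax cross-entropy; the margin makes the target class harder to satisfy, which tightens intra-class angles.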

Training Data

All face images are aligned by five facial landmarks and cropped to 112x112:

Please check Dataset-Zoo for detail information and dataset downloading.

  • Please check recognition/tools/face2rec2.py on how to build a binary face dataset. You can either choose MTCNN or RetinaFace to align the faces.

Train

  1. Install MXNet with GPU support (Python 3.x).
pip install mxnet-cu101 # choose the mxnet-cu* variant matching your installed CUDA version
  2. Clone the InsightFace repository. We refer to the directory insightface as INSIGHTFACE_ROOT.
git clone --recursive https://github.com/deepinsight/insightface.git
  3. Download the training set (MS1M-Arcface) and place it in $INSIGHTFACE_ROOT/recognition/datasets/. Each training dataset includes at least the following six files:
    faces_emore/
       train.idx
       train.rec
       property
       lfw.bin
       cfp_fp.bin
       agedb_30.bin

The first three files are the training dataset while the last three files are verification sets.
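Before launching a long training run, it is worth verifying this six-file layout programmatically. The sketch below is a hypothetical helper, not part of the repository; it assumes the `property` file holds a single comma-separated line of the form `num_classes,height,width`, which is the convention used by the packed MXNet datasets.

```python
import os

# Files expected in each packed dataset directory (e.g. faces_emore/).
REQUIRED = ["train.idx", "train.rec", "property",
            "lfw.bin", "cfp_fp.bin", "agedb_30.bin"]

def check_dataset(root):
    """Raise if any expected file is missing; return (num_classes, image_size).

    Assumes `property` contains one line "num_classes,height,width".
    """
    missing = [f for f in REQUIRED if not os.path.exists(os.path.join(root, f))]
    if missing:
        raise FileNotFoundError(f"dataset incomplete, missing: {missing}")
    with open(os.path.join(root, "property")) as fh:
        num_classes, height, width = map(int, fh.read().strip().split(","))
    return num_classes, (height, width)
```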

  4. Train deep face recognition models. In this part, we assume you are in the directory $INSIGHTFACE_ROOT/recognition/ArcFace/.

Place and edit config file:

cp sample_config.py config.py
vim config.py # edit dataset path, etc.

We give some examples below. Our experiments were conducted on the Tesla P40 GPU.

(1). Train ArcFace with LResNet100E-IR.

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network r100 --loss arcface --dataset emore

It will output verification results for LFW, CFP-FP and AgeDB-30 every 2000 batches. You can check all options in config.py. This model can achieve LFW 99.83%+ and MegaFace 98.3%+.

(2). Train CosineFace with LResNet50E-IR.

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network r50 --loss cosface --dataset emore

(3). Train Softmax with LMobileNet-GAP.

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network m1 --loss softmax --dataset emore

(4). Fine-tune the above Softmax model with Triplet loss.

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network m1 --loss triplet --lr 0.005 --pretrained ./models/m1-softmax-emore,1
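For reference, the Euclidean triplet objective used in this fine-tuning stage pulls each anchor-positive distance below the anchor-negative distance by a fixed margin. A minimal NumPy sketch of that loss (the function and its default margin are illustrative, not the repository's implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Euclidean triplet loss on L2-normalised embeddings.

    Zero when the negative is already farther from the anchor than the
    positive by at least `margin`; positive otherwise.
    """
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = unit(anchor), unit(positive), unit(negative)
    d_ap = np.sum((a - p) ** 2, axis=-1)   # anchor-positive squared distance
    d_an = np.sum((a - n) ** 2, axis=-1)   # anchor-negative squared distance
    return np.maximum(d_ap - d_an + margin, 0.0)
```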

(5). Train with model-parallel acceleration.

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train_parall.py --network r100 --loss arcface --dataset emore
  5. Verification results.

LResNet100E-IR network trained on MS1M-Arcface dataset with ArcFace loss:

| Method | LFW (%) | CFP-FP (%) | AgeDB-30 (%) |
| ------ | ------- | ---------- | ------------ |
| Ours   | 99.80+  | 98.0+      | 98.20+       |

Pretrained Models

You can use $INSIGHTFACE_ROOT/recognition/arcface_torch/eval/verification.py to test all the pre-trained models.

Please check Model-Zoo for more pretrained models.

Verification Results on Combined Margin

A combined margin method was proposed as a function of the target logit value and the original θ:

COM(θ) = cos(m1*θ + m2) - m3

To train with m1=1.0, m2=0.3, m3=0.2, run the following command:

CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network r100 --loss combined --dataset emore
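The formula above subsumes the individual losses: m1 alone gives a SphereFace-style multiplicative margin, m2 the ArcFace additive angle, and m3 the CosineFace additive cosine term. A small NumPy sketch of COM(θ), written here for illustration rather than taken from the training code:

```python
import numpy as np

def combined_margin(cos_theta, m1=1.0, m2=0.3, m3=0.2):
    """COM(theta) = cos(m1*theta + m2) - m3, applied to a target-class cosine."""
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return np.cos(m1 * theta + m2) - m3
```

Setting (m1, m2, m3) to (1, 0.5, 0) recovers ArcFace and (1, 0, 0.35) recovers CosineFace, matching the rows of the table below.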

Results by using MS1M-IBUG(MS1M-V1)

| Method | m1 | m2 | m3 | LFW | CFP-FP | AgeDB-30 |
| ------ | -- | -- | -- | --- | ------ | -------- |
| W&F Norm Softmax | 1 | 0 | 0 | 99.28 | 88.50 | 95.13 |
| SphereFace | 1.5 | 0 | 0 | 99.76 | 94.17 | 97.30 |
| CosineFace | 1 | 0 | 0.35 | 99.80 | 94.4 | 97.91 |
| ArcFace | 1 | 0.5 | 0 | 99.83 | 94.04 | 98.08 |
| Combined Margin | 1.2 | 0.4 | 0 | 99.80 | 94.08 | 98.05 |
| Combined Margin | 1.1 | 0 | 0.35 | 99.81 | 94.50 | 98.08 |
| Combined Margin | 1 | 0.3 | 0.2 | 99.83 | 94.51 | 98.13 |
| Combined Margin | 0.9 | 0.4 | 0.15 | 99.83 | 94.20 | 98.16 |

Test on MegaFace

Please check $INSIGHTFACE_ROOT/evaluation/megaface/ to evaluate the model accuracy on MegaFace. All aligned images are already provided.

512-D Feature Embedding

In this part, we assume you are in the directory $INSIGHTFACE_ROOT/deploy/. The input face image should generally be centre-cropped. We use the RNet+ONet stages of MTCNN to further align the image before sending it to the feature embedding network.

  1. Prepare a pre-trained model.
  2. Put the model under $INSIGHTFACE_ROOT/models/. For example, $INSIGHTFACE_ROOT/models/model-r100-ii.
  3. Run the test script $INSIGHTFACE_ROOT/deploy/test.py.

For a single cropped face image (112x112), the total inference time is only 17ms on our testing server (Intel E5-2660 @ 2.00GHz, Tesla M40, LResNet34E-IR).
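Once test.py produces a 512-D feature per face, two faces are typically compared by the cosine similarity of their embeddings; a score near 1 suggests the same identity. A minimal sketch (ours, for illustration; any decision threshold should be tuned on a validation set):

```python
import numpy as np

def cosine_similarity(feat1, feat2):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    f1 = feat1 / np.linalg.norm(feat1)
    f2 = feat2 / np.linalg.norm(feat2)
    return float(np.dot(f1, f2))
```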

Third-party Re-implementation

Face Detection

RetinaFace

RetinaFace is a practical single-stage SOTA face detector, initially described in an arXiv technical report and later accepted by CVPR 2020. We provide training code, the training dataset, pretrained models and evaluation scripts.


Please check RetinaFace for detail.

RetinaFaceAntiCov

RetinaFaceAntiCov is an experimental module to identify face boxes with masks. Please check RetinaFaceAntiCov for detail.


Face Alignment

DenseUNet

Please check the Menpo Benchmark and our Dense U-Net for detail. We also provide other network settings, such as the classic hourglass. You can find all of the training code, the training dataset and the evaluation scripts there.

CoordinateReg

In contrast to heatmap-based approaches, we also provide lightweight facial landmark models with fast coordinate regression. The input of these models is a loosely cropped face image, while the output is the direct landmark coordinates. See detail at alignment-coordinateReg. Currently only pretrained models are available.


Citation

If you find InsightFace useful in your research, please consider citing the following related papers:

@inproceedings{deng2019retinaface,
  title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
  author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={arxiv},
  year={2019}
}

@inproceedings{guo2018stacked,
  title={Stacked Dense U-Nets with Dual Transformers for Robust Face Alignment},
  author={Guo, Jia and Deng, Jiankang and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={BMVC},
  year={2018}
}

@article{deng2018menpo,
  title={The Menpo benchmark for multi-pose 2D and 3D facial landmark localisation and tracking},
  author={Deng, Jiankang and Roussos, Anastasios and Chrysos, Grigorios and Ververas, Evangelos and Kotsia, Irene and Shen, Jie and Zafeiriou, Stefanos},
  journal={IJCV},
  year={2018}
}

@inproceedings{deng2018arcface,
  title={ArcFace: Additive Angular Margin Loss for Deep Face Recognition},
  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={CVPR},
  year={2019}
}

Contact

[Jia Guo](guojia[at]gmail.com)
[Jiankang Deng](jiankangdeng[at]gmail.com)