face-property-detection-pytorch

1. Data structure

The structure of landmarks_jpg is as follows:

|--celeba1
|----celeba_face
|------000001.jpg
|------000002.jpg
|------ .....
|------020000.jpg
|----celeba_raw_pic
|------000001.jpg
|------000002.jpg
|------ .....
|------020000.jpg

celeba_raw_pic contains the original pictures, without any processing. celeba_face contains the face region cropped from each raw picture.

figure 1: raw picture (img2.png)

figure 2: face region of the raw picture (img1.png)
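
As a quick sanity check on this layout (a small sketch based on the directory tree above, not part of the repo), you can verify that every raw picture has a matching face crop:

import os

raw = set(os.listdir("celeba1/celeba_raw_pic"))
face = set(os.listdir("celeba1/celeba_face"))
print("raw pictures without a face crop:", sorted(raw - face))
print("face crops without a raw picture:", sorted(face - raw))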

You can run the command below to perform the data processing:

python3 create_data.py 

This command uses the MTCNN model to extract the face regions. However, the model fails on some pictures; in my test, it could not detect a face in the pictures listed below (a minimal sketch of the cropping step follows the list).

# file 000199.jpg cannot detect face
# file 001401.jpg cannot detect face
# file 002214.jpg cannot detect face
# file 002432.jpg cannot detect face
# file 002920.jpg cannot detect face
# file 003928.jpg cannot detect face
# file 003946.jpg cannot detect face
# file 004932.jpg cannot detect face
# file 005283.jpg cannot detect face
# file 006010.jpg cannot detect face
# file 006531.jpg cannot detect face
# file 007726.jpg cannot detect face
# file 008287.jpg cannot detect face
# file 011529.jpg cannot detect face
# file 011793.jpg cannot detect face
# file 013374.jpg cannot detect face
# file 013654.jpg cannot detect face
# file 014999.jpg cannot detect face
# file 016530.jpg cannot detect face
# file 016797.jpg cannot detect face
# file 017282.jpg cannot detect face
# file 017586.jpg cannot detect face
# file 018309.jpg cannot detect face
# file 018599.jpg cannot detect face
# file 018884.jpg cannot detect face
# file 019205.jpg cannot detect face
# file 019377.jpg cannot detect face
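
For reference, here is a minimal sketch of such a cropping step, assuming the MTCNN implementation from facenet-pytorch; the repo's create_data.py may use a different MTCNN package and cropping margin.

import os
from PIL import Image
from facenet_pytorch import MTCNN

raw_dir = "celeba1/celeba_raw_pic"
face_dir = "celeba1/celeba_face"
os.makedirs(face_dir, exist_ok=True)

detector = MTCNN(keep_all=False)                 # keep only the most confident face

for name in sorted(os.listdir(raw_dir)):
    img = Image.open(os.path.join(raw_dir, name)).convert("RGB")
    boxes, _ = detector.detect(img)              # boxes is None when no face is found
    if boxes is None:
        print(f"# file {name} cannot detect face")
        continue
    x1, y1, x2, y2 = [int(v) for v in boxes[0]]
    img.crop((x1, y1, x2, y2)).save(os.path.join(face_dir, name))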

So I replace these pictures with 000001.jpg. I also revise the label file list_attr_celeba.txt, replacing the problematic entries with the 000001.jpg entry, which gives list_attr_celeba-face.txt. You can use Beyond Compare to diff the changes I made (see img.png).
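
A hedged sketch of that replacement step is shown below. It assumes the standard CelebA list_attr_celeba.txt layout (an image-count line, an attribute-name line, then one "filename attr1 attr2 ..." row per image), and the failed list is truncated for brevity; the actual edits in the repo may differ.

import shutil

failed = ["000199.jpg", "001401.jpg", "002214.jpg"]   # ... extend with the full list above

# overwrite the face crops of the images whose faces could not be detected
for name in failed:
    shutil.copy("celeba1/celeba_face/000001.jpg", f"celeba1/celeba_face/{name}")

# rewrite the attribute rows of the failed images with 000001.jpg's attributes
with open("list_attr_celeba.txt") as f:
    lines = f.readlines()

ref_attrs = lines[2].split(maxsplit=1)[1]             # attribute row of 000001.jpg

with open("list_attr_celeba-face.txt", "w") as f:
    for line in lines:
        parts = line.split(maxsplit=1)
        if parts and parts[0] in failed:
            f.write(f"{parts[0]} {ref_attrs}")
        else:
            f.write(line)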

You can download the data from the cloud drive:

name                   link
celeba_face.zip        https://pan.baidu.com/s/15nsbvla8eCy_n3EsUMH36Q  (code: 5ipn)
celeba_raw_pic.zip     https://pan.baidu.com/s/1WM3Zo3zLfKsAFvrDl03suQ  (code: 3q70)

2. How to train

First, install the third-party packages:

pip install -r requirements.txt

Then simply run the command below:

python3 train.py

If you want to use the pretrained models, revise the code below as needed:

load_pretrain_model = False  # set to True to resume from a pretrained checkpoint
model_dir = r".\pretrain_models\model-resnet-50-justface-state.ptn"
if load_pretrain_model:
    checkpoint = torch.load(model_dir)   # load the saved state dict
    model.load_state_dict(checkpoint)    # restore the model weights
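
For reference, a state-dict checkpoint in the format this snippet loads can be produced with torch.save; the path below simply mirrors the pretrained file name above.

torch.save(model.state_dict(), r".\pretrain_models\model-resnet-50-justface-state.ptn")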

3. How to test

Revise the test file name in predict.py, then run the command below:

python3 predict.py
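
For orientation, here is a hedged sketch of single-image inference. It assumes a torchvision ResNet-50 with a 40-way CelebA attribute head and 224x224 inputs, which may not match the actual model and preprocessing in predict.py.

import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 40)   # 40 CelebA attributes (assumption)
model.load_state_dict(torch.load(r".\pretrain_models\model-resnet-50-justface-state.ptn"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

img = preprocess(Image.open("celeba1/celeba_face/000001.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)
print((logits > 0).int())                               # 1 = attribute predicted present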