Official Implementation and Dataset of "PPR10K: A Large-Scale Portrait Photo Retouching Dataset with Human-Region Mask and Group-Level Consistency", CVPR 2021


Portrait Photo Retouching with PPR10K

Paper | Supplementary Material

PPR10K: A Large-Scale Portrait Photo Retouching Dataset with Human-Region Mask and Group-Level Consistency
Jie Liang*, Hui Zeng*, Miaomiao Cui, Xuansong Xie and Lei Zhang.
In CVPR 2021.

The proposed Portrait Photo Retouching dataset (PPR10K) is large-scale and diverse; it contains:

  • 11,161 high-quality raw portrait photos (resolutions from 4K to 8K) in 1,681 groups;
  • 3 versions of manually retouched targets for all photos, given by 3 expert retouchers;
  • full-resolution human-region masks for all photos.

Samples

sample_images

Two example groups of photos from the PPR10K dataset. Top: the raw photos; Bottom: the retouched results from expert-a and the human-region masks. The raw photos exhibit poor visual quality and large variance in subject views, background contexts, lighting conditions and camera settings. In contrast, the retouched results demonstrate both good visual quality (with human-region priority) and group-level consistency.

This dataset is the first of its kind to consider two special and practical requirements of the portrait photo retouching task, i.e., Human-Region Priority and Group-Level Consistency. Three main challenges are expected to be tackled in follow-up research:

  • Flexible and content-adaptive models to handle the diversity of image contents and lighting conditions;
  • Highly efficient models to process images at practical resolutions, from 4K to 8K;
  • Robust and stable models that meet the requirement of group-level consistency.

Agreement

  • All files in the PPR10K dataset are available for non-commercial research purposes only.
  • You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion of the images and any portion of derived data.

Overview

All data is hosted on Google Drive, OneDrive and 百度网盘 (Baidu Netdisk, extraction code: mrwn):

Path                        | Size   | Files   | Format | Description
PPR10K-dataset              | 406 GB | 176,072 |        | Main folder
├ raw                       | 313 GB | 11,161  | RAW    | All photos in raw format (.CR2, .NEF, .ARW, etc.)
├ xmp_source                | 130 MB | 11,161  | XMP    | Default CameraRaw metadata files of the raw photos, used in our data augmentation
├ xmp_target_a              | 130 MB | 11,161  | XMP    | CameraRaw metadata files recording the full adjustments by expert a
├ xmp_target_b              | 130 MB | 11,161  | XMP    | CameraRaw metadata files recording the full adjustments by expert b
├ xmp_target_c              | 130 MB | 11,161  | XMP    | CameraRaw metadata files recording the full adjustments by expert c
├ masks_full                | 697 MB | 11,161  | PNG    | Full-resolution human-region masks in binary format
├ masks_360p                | 56 MB  | 11,161  | PNG    | 360p human-region masks for fast training and validation
├ train_val_images_tif_360p | 91 GB  | 97,894  | TIF    | 360p source (16-bit TIFF, with 5 versions of augmented images) and target (8-bit TIFF) images for fast training and validation
├ pretrained_models         | 268 MB | 12      | PTH    | Pretrained models for all 3 expert versions
└ hists                     | 624 KB | 39      | PNG    | Overall statistics of the dataset

One can directly use the provided 360p training and validation files (540x360 or 360x540 resolution, sRGB color space), i.e., the photos, 5 versions of augmented photos and the corresponding human-region masks, following the settings in our paper: train with the first 8,875 photos and validate with the last 2,286.
Also, see the instructions to customize your data (e.g., to augment the training samples with respect to illumination and color, or to obtain photos at higher or full resolution).
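
For a quick start with the 360p files, a minimal loading sketch is given below. This is not the official dataloader; the folder and file-naming conventions here (a source/ and target_a/ subfolder under train_val_images_tif_360p, masks under masks_360p) are assumptions, so verify them against the downloaded data and the dataloaders in code_3DLUT.

# A minimal loading sketch, NOT the official dataloader. Assumed layout:
# - source images: 16-bit TIFFs under train_val_images_tif_360p/source/
# - expert-a targets: 8-bit TIFFs under train_val_images_tif_360p/target_a/
# - masks: binary PNGs under masks_360p/
import glob
import os

import cv2
import numpy as np

root = "path/to/PPR10K-dataset"
src_files = sorted(glob.glob(os.path.join(root, "train_val_images_tif_360p", "source", "*.tif")))
tgt_files = sorted(glob.glob(os.path.join(root, "train_val_images_tif_360p", "target_a", "*.tif")))
msk_files = sorted(glob.glob(os.path.join(root, "masks_360p", "*.png")))

# Paper split: the first 8,875 photos for training, the last 2,286 for
# validation. The source folder also holds 5 augmented versions of each
# photo, so split by photo id rather than by raw file count.

def load_triplet(src_path, tgt_path, msk_path):
    # 16-bit source -> float32 in [0, 1]
    src = cv2.imread(src_path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0
    # 8-bit retouched target -> float32 in [0, 1]
    tgt = cv2.imread(tgt_path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
    # binary human-region mask -> {0.0, 1.0}
    msk = (cv2.imread(msk_path, cv2.IMREAD_GRAYSCALE) > 127).astype(np.float32)
    return src, tgt, msk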

Training and Validating on PPR10K using 3D LUT

Installation

  • Clone this repo.
git clone https://github.com/csjliang/PPR10K
cd PPR10K/code_3DLUT/
  • Install dependencies.
pip install -r requirements.txt
  • Build the trilinear interpolation extension. Modify the CUDA path in trilinear_cpp/setup.sh to match your environment, then:
cd trilinear_cpp
sh setup.sh
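
If the build succeeds, the compiled trilinear interpolation extension should be importable from Python. A quick sanity check is sketched below; the module name trilinear is an assumption, so confirm it against trilinear_cpp/setup.py.

# Sanity check that the CUDA extension built and installed correctly.
# "trilinear" is the assumed module name; check trilinear_cpp/setup.py.
import torch
import trilinear  # noqa: F401  -- the import fails if the build did not succeed

print("CUDA available:", torch.cuda.is_available())
print("trilinear extension imported")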

Training

  • Training without the HRP and GLC strategies and saving the models:
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask False --output_dir [path_to_save_models]
  • Training with HRP but without the GLC strategy and saving the models:
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
  • Training with GLC but without the HRP strategy and saving the models:
python train_GLC.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask False --output_dir [path_to_save_models]
  • Training with both the HRP and GLC strategies and saving the models (see the sketch after this list for what the two terms compute):
python train_GLC.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
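
Here, --use_mask toggles the Human-Region Priority (HRP) weighting and train_GLC.py adds the Group-Level Consistency (GLC) objective. The sketch below only illustrates the general idea behind the two terms, with assumed weights and tensor shapes; it is not the exact loss implemented in train.py or train_GLC.py, nor the precise formulation in the paper.

import torch

def hrp_weighted_l1(pred, target, mask, human_weight=2.0):
    # Mask-weighted reconstruction loss: errors inside the human region
    # (mask == 1) count more than background errors. human_weight is an
    # arbitrary placeholder value.
    weight = 1.0 + (human_weight - 1.0) * mask            # (B, 1, H, W)
    return (weight * (pred - target).abs()).mean()

def glc_penalty(group_preds):
    # Group-level consistency: penalize the deviation of each retouched
    # result's mean color from the mean color of its photo group. A crude
    # stand-in for the GLC regularizer described in the paper.
    means = torch.stack([p.mean(dim=(-2, -1)) for p in group_preds])  # (G, C)
    return ((means - means.mean(dim=0, keepdim=True)) ** 2).mean()

In a combined objective, something like hrp_weighted_l1(...) + lambda_glc * glc_penalty(...) would be minimized, with the GLC term computed over retouched results from the same photo group.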

Evaluation

  • Generate the retouched results:
python validation.py --data_path [path_to_dataset] --gpu_id [gpu_id] --model_dir [path_to_models]
  • Use MATLAB to calculate the metrics reported in our paper:
calculate_metrics(source_dir, target_dir, mask_dir)
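
If MATLAB is not at hand, a rough Python approximation of the basic measures (PSNR and a human-region-restricted variant) can serve as a quick sanity check. This is only a sketch under assumed conventions; it does not replace calculate_metrics and omits the color-difference and group-level consistency measures from the paper.

import numpy as np

def psnr(pred, target, max_val=1.0):
    # Standard PSNR between two images with values in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def human_region_psnr(pred, target, mask, max_val=1.0):
    # PSNR restricted to the human region (mask == 1). An approximation of
    # human-region-weighted evaluation, not the exact measure used in
    # calculate_metrics.
    m = mask.astype(bool)
    diff = pred[m].astype(np.float64) - target[m].astype(np.float64)
    return 10.0 * np.log10(max_val ** 2 / np.mean(diff ** 2))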

Pretrained Models

  • Move the downloaded pretrained models to saved_models/:
mv your/path/to/pretrained_models/* saved_models/
  • Specify --model_dir and --epoch (-1) to validate with, or to initialize training from, the pretrained models, e.g.:
python validation.py --data_path [path_to_dataset] --gpu_id [gpu_id] --model_dir mask_noglc_a --epoch -1
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir mask_noglc_a --epoch -1

Citation

If you use this dataset or code for your research, please cite our paper.

@inproceedings{jie2021PPR10K,
  title={PPR10K: A Large-Scale Portrait Photo Retouching Dataset with Human-Region Mask and Group-Level Consistency},
  author={Liang, Jie and Zeng, Hui and Cui, Miaomiao and Xie, Xuansong and Zhang, Lei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Related Projects

3D LUT

Contact

Should you have any questions, please contact me via [email protected].
