Auto White-Balance Correction for Mixed-Illuminant Scenes

Overview

Mahmoud Afifi, Marcus A. Brubaker, and Michael S. Brown

York University   

Video

Reference code for the paper Auto White-Balance Correction for Mixed-Illuminant Scenes by Mahmoud Afifi, Marcus A. Brubaker, and Michael S. Brown (WACV 2022). If you use this code or our dataset, please cite our paper:

@inproceedings{afifi2022awb,
  title={Auto White-Balance Correction for Mixed-Illuminant Scenes},
  author={Afifi, Mahmoud and Brubaker, Marcus A. and Brown, Michael S.},
  booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2022}
}

[Figure: teaser]

The vast majority of white-balance algorithms assume a single light source illuminates the scene; however, real scenes often have mixed lighting conditions. We present an effective auto white-balance method for such mixed-illuminant scenes. In a departure from conventional auto white balance, our method does not require illuminant estimation, as traditional camera auto white-balance modules do. Instead, our method renders the captured scene with a small set of predefined white-balance settings. Given this set of small rendered images, our method learns to estimate weighting maps that blend the rendered images into the final corrected image.
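
To make the blending step concrete, below is a minimal sketch (not the repository's actual code) of how a set of weighting maps can combine the pre-rendered images; the tensor shapes and the softmax normalization are assumptions for illustration:

import torch

def blend_renders(renders, weights):
    # renders: (N, 3, H, W) -- the scene rendered with N predefined WB settings.
    # weights: (N, 1, H, W) -- predicted weighting maps, assumed normalized so
    # they sum to 1 at every pixel (e.g., via a softmax over the N maps).
    return (weights * renders).sum(dim=0)  # (3, H, W) final corrected image

# Example with three renders and softmax-normalized weights:
renders = torch.rand(3, 3, 256, 256)
weights = torch.softmax(torch.rand(3, 1, 256, 256), dim=0)
corrected = blend_renders(renders, weights)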

[Figure: method overview]

Our method is built on top of the modified camera ISP proposed here. This repo provides the source code of the deep network proposed in our paper.

Code

Training

To start training, you should first download the Rendered WB dataset, which includes ~65K sRGB images rendered with different color temperatures. Each image in this dataset has a corresponding ground-truth sRGB image rendered with an accurate white-balance correction. From this dataset, we selected 9,200 training images that were rendered with the "camera standard" photofinishing and the following white-balance settings: tungsten (or incandescent), fluorescent, daylight, cloudy, and shade. To get this set, use only the images whose filenames end with one of the following suffixes: _T_CS.png, _F_CS.png, _D_CS.png, _C_CS.png, _S_CS.png, along with their associated ground-truth images (which end with _G_AS.png).

Copy all training input images to ./data/images and copy all ground-truth images to ./data/ground truth images. Note that if you are going to train on a subset of these white-balance settings (e.g., tungsten, daylight, and shade), there is no need to include the other white-balance settings in your training image directory.
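
The following sketch shows one way to perform this selection and copy step; the RenderedWB/ source folder is a hypothetical path for wherever you extracted the dataset:

import glob
import os
import shutil

# Suffixes of the "camera standard" renders used for training, plus the
# ground-truth suffix (see above).
INPUT_SUFFIXES = ('_T_CS.png', '_F_CS.png', '_D_CS.png', '_C_CS.png', '_S_CS.png')
GT_SUFFIX = '_G_AS.png'

src_dir = 'RenderedWB/'  # hypothetical: wherever the dataset was extracted
os.makedirs('./data/images', exist_ok=True)
os.makedirs('./data/ground truth images', exist_ok=True)

for path in glob.glob(os.path.join(src_dir, '*.png')):
    name = os.path.basename(path)
    if name.endswith(INPUT_SUFFIXES):
        shutil.copy(path, './data/images')
    elif name.endswith(GT_SUFFIX):
        shutil.copy(path, './data/ground truth images')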

Then, run the following command:

python train.py --wb-settings <WB SETTING 1> <WB SETTING 2> ... <WB SETTING N> --model-name <MODEL NAME> --patch-size <PATCH SIZE> --batch-size <BATCH SIZE> --gpu <GPU NUMBER>

where each <WB SETTING i> should be one of the following settings: T, F, D, C, S, which refer to tungsten, fluorescent, daylight, cloudy, and shade, respectively. Note that daylight (D) should always be one of the white-balance settings. For instance, to train a model using the tungsten and shade white-balance settings in addition to daylight white balance, which is the fixed setting for the high-resolution image (as described in the paper), you can use this command:

python train.py --wb-settings T D S --model-name <MODEL NAME>

Testing

Our pre-trained models are provided in ./models. To test a pre-trained model, use the following command:

python test.py --wb-settings <WB SETTING 1> <WB SETTING 2> ... <WB SETTING N> --model-name <MODEL NAME> --testing-dir <TEST IMAGE DIRECTORY> --outdir <RESULT DIRECTORY> --gpu <GPU NUMBER>

As mentioned in the paper, we apply ensembling and edge-aware smoothing (EAS) to the generated weights. To use ensembling, use --multi-scale True. To use EAS, use --post-process True. Shown below is a qualitative comparison of our results with and without ensembling and EAS.

[Figure: weighting-maps ablation]
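
As a rough illustration of the ensembling idea (not the repository's actual implementation), the network can be run at several input scales, with the resulting weighting maps upsampled to full resolution and averaged; the network interface and scale set below are assumptions:

import torch
import torch.nn.functional as F

def ensemble_weights(net, stacked_renders, scales=(128, 256, 384)):
    # stacked_renders: (1, 3*N, H, W) -- the N WB renders concatenated along
    # the channel axis; net is assumed to return (1, N, h, w) weighting maps.
    _, _, H, W = stacked_renders.shape
    acc = 0.0
    for s in scales:
        small = F.interpolate(stacked_renders, size=(s, s), mode='bilinear',
                              align_corners=False)
        acc = acc + F.interpolate(net(small), size=(H, W), mode='bilinear',
                                  align_corners=False)
    # Average over scales and renormalize so the maps sum to 1 per pixel.
    return torch.softmax(acc / len(scales), dim=1)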

Experimentally, we found that an input size of 384x384 is recommended when ensembling is used, while 128x128 or 256x256 gives the best results without it. To control the size of input images at inference time, use --target-size. For instance, to set the target size to 256, use --target-size 256.
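
For example, a full test invocation for a model trained with the T D S settings, with ensembling and EAS enabled at the recommended 384x384 size, might look like this (model name and paths are placeholders):

python test.py --wb-settings T D S --model-name <MODEL NAME> --testing-dir <TEST IMAGE DIRECTORY> --outdir <RESULT DIRECTORY> --multi-scale True --post-process True --target-size 384 --gpu <GPU NUMBER>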

Network

Our network has a GridNet-like architecture consisting of six columns and four rows. As shown in the figure below, it includes three main units: the residual unit (shown in blue), the downsampling unit (shown in green), and the upsampling unit (shown in yellow). If you are looking for the PyTorch implementation of GridNet, you can check src/gridnet.py.

[Figure: network architecture]
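
For reference, below is a minimal PyTorch sketch of the three unit types; the channel handling, activations, and upsampling mode are assumptions based on common GridNet variants, so check src/gridnet.py for the actual implementation:

import torch.nn as nn

class ResidualUnit(nn.Module):
    # Blue unit: two 3x3 convs with a skip connection; channels unchanged.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class DownsamplingUnit(nn.Module):
    # Green unit: strided conv halves the spatial size (moves down one row).
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.LeakyReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class UpsamplingUnit(nn.Module):
    # Yellow unit: bilinear upsampling + conv doubles the spatial size
    # (moves up one row).
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(inplace=True))

    def forward(self, x):
        return self.body(x)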

Results

Given this set of rendered images, our method learns to produce weighting maps that blend the rendered images into the final corrected image. Shown below are examples of the produced weighting maps.

[Figure: weighting maps]

Shown below are qualitative comparisons of our results with the camera auto white-balance correction, along with the results of applying post-capture white-balance correction using KNN white balance and deep white balance.

[Figure: qualitative comparisons on the 5K dataset]

Our method has the limitation of requiring a modified ISP to render the additional small images with our predefined set of white-balance settings. To process images that have already been rendered by the camera (e.g., JPEG images), we can employ one of the sRGB white-balance editing methods to synthetically generate our small images with the predefined WB settings at post-capture time.

In the figure below, we illustrate this idea by employing deep white-balance editing to generate the small images for a given sRGB camera-rendered image taken from Flickr. As shown, our method produces a better result compared to the camera-rendered image (i.e., traditional camera AWB) and to the deep WB result for post-capture WB correction. If the input image does not have the associated small images (as described above), the provided source code automatically runs deep white-balance editing to generate them.

[Figure: qualitative result on a Flickr image]

Dataset

[Figure: dataset examples]

We generated a synthetic test set to quantitatively evaluate white-balance methods on mixed-illuminant scenes. Our test set consists of 150 images with mixed illuminations. The ground truth of each image is produced by rendering the same scene with a single fixed color temperature for all light sources and applying the camera auto white balance. Ground-truth images end with _G_AS.png, while input images end with _X_CS.png, where X refers to the white-balance setting used to render each image.
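
To pair each input with its ground truth for evaluation, the naming convention can be exploited as in the sketch below (the test-set folder name is hypothetical):

import glob
import os

test_dir = 'mixedill_test_set/'  # hypothetical path to the downloaded test set
pairs = []
for path in sorted(glob.glob(os.path.join(test_dir, '*_CS.png'))):
    scene = os.path.basename(path).rsplit('_', 2)[0]  # strip '_X_CS.png'
    gt = os.path.join(test_dir, scene + '_G_AS.png')
    if os.path.exists(gt):
        pairs.append((path, gt))  # (mixed-illuminant input, ground truth)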

You can download our test set from one of the following links:

Acknowledgement

A big thanks to Mohammed Hossam for his help in generating our synthetic test set.

Commercial Use

This software and data are provided for research purposes only and CANNOT be used for commercial purposes.

Related Research Projects

  • C5: A self-calibration method for cross-camera illuminant estimation (ICCV 2021).
  • Deep White-Balance Editing: A multi-task deep learning model for post-capture white-balance correction and editing (CVPR 2020).
  • Interactive White Balancing: A simple method to link the nonlinear white-balance correction to the user's selected colors to allow interactive white-balance manipulation (CIC 2020).
  • White-Balance Augmenter: An augmentation technique based on camera WB errors (ICCV 2019).
  • When Color Constancy Goes Wrong: The first work to directly address the problem of incorrectly white-balanced images; requires only a small memory overhead and is fast (CVPR 2019).
  • Color temperature tuning: A modified camera ISP to allow white-balance editing in post-capture time (CIC 2019).
  • SIIE: A learning-based sensor-independent illumination estimation method (BMVC 2019).