Auto White-Balance Correction for Mixed-Illuminant Scenes

Mahmoud Afifi, Marcus A. Brubaker, and Michael S. Brown

York University   

Video

Reference code for the paper Auto White-Balance Correction for Mixed-Illuminant Scenes by Mahmoud Afifi, Marcus A. Brubaker, and Michael S. Brown. If you use this code or our dataset, please cite our paper:

@inproceedings{afifi2022awb,
  title={Auto White-Balance Correction for Mixed-Illuminant Scenes},
  author={Afifi, Mahmoud and Brubaker, Marcus A. and Brown, Michael S.},
  booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2022}
}

[Figure: teaser]

The vast majority of white-balance algorithms assume a single light source illuminates the scene; however, real scenes often have mixed lighting conditions. Our method is an effective auto white-balance correction for such mixed-illuminant scenes. In a departure from conventional auto white balance, our method does not require illuminant estimation, as traditional camera auto white-balance modules do. Instead, it renders the captured scene with a small set of predefined white-balance settings. Given this set of small rendered images, our method learns to estimate weighting maps that are used to blend the rendered images into the final corrected image.

[Figure: method]

Our method was built on top of the modified camera ISP proposed here. This repo provides the source code of the deep network described in our paper.

Code

Training

To start training, first download the Rendered WB dataset, which includes ~65K sRGB images rendered with different color temperatures. Each image in this dataset has a corresponding ground-truth sRGB image rendered with an accurate white-balance correction. From this dataset, we selected 9,200 training images that were rendered with the "camera standard" photofinishing and the following white-balance settings: tungsten (or incandescent), fluorescent, daylight, cloudy, and shade. To get this set, you only need the images whose filenames end with the following suffixes: _T_CS.png, _F_CS.png, _D_CS.png, _C_CS.png, _S_CS.png, along with their associated ground-truth images (ending with _G_AS.png).

Copy all training input images to ./data/images and all ground-truth images to ./data/ground truth images; the sketch below shows one way to automate this. Note that if you are going to train on a subset of these white-balance settings (e.g., tungsten, daylight, and shade), there is no need to include the additional white-balance settings in your training image directory.
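For convenience, here is a small helper sketch that collects this subset. The source directory name is an assumption; adjust the paths to wherever you extracted the Rendered WB dataset.

import shutil
from pathlib import Path

SRC = Path('./rendered_wb')                   # assumed location of the extracted dataset
IMG_DIR = Path('./data/images')               # training inputs
GT_DIR = Path('./data/ground truth images')   # ground-truth targets
INPUT_SUFFIXES = ('_T_CS.png', '_F_CS.png', '_D_CS.png', '_C_CS.png', '_S_CS.png')

IMG_DIR.mkdir(parents=True, exist_ok=True)
GT_DIR.mkdir(parents=True, exist_ok=True)

for f in SRC.iterdir():
    if f.name.endswith(INPUT_SUFFIXES):
        shutil.copy(f, IMG_DIR / f.name)      # input rendered with one of the five WB settings
    elif f.name.endswith('_G_AS.png'):
        shutil.copy(f, GT_DIR / f.name)       # associated ground-truth image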

Then, run the following command:

python train.py --wb-settings <WB SETTING 1> <WB SETTING 2> ... --model-name <MODEL NAME> --patch-size <PATCH SIZE> --batch-size <BATCH SIZE> --gpu <GPU NUMBER>

where each WB SETTING i should be one of the following: T, F, D, C, or S, which refer to tungsten, fluorescent, daylight, cloudy, and shade, respectively. Note that daylight (D) must be one of the selected white-balance settings, as it is the fixed setting used for the high-resolution image (as described in the paper). For instance, to train a model using the tungsten and shade white-balance settings in addition to daylight, you can use this command:

python train.py --wb-settings T D S --model-name <MODEL NAME>
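A complete invocation might look like the following; the model name, patch size, batch size, and GPU index here are illustrative placeholders, not values recommended by the paper:

python train.py --wb-settings T D S --model-name awb_tds --patch-size 64 --batch-size 32 --gpu 0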

Testing

Our pre-trained models are provided in ./models. To test a pre-trained model, use the following command:

python test.py --wb-settings <WB SETTING 1> <WB SETTING 2> ... --model-name <MODEL NAME> --testing-dir <TEST IMAGE DIR> --outdir <RESULT DIR> --gpu <GPU NUMBER>

As mentioned in the paper, we apply ensembling and edge-aware smoothing (EAS) to the generated weights. To enable ensembling, use --multi-scale True; to enable EAS, use --post-process True. Shown below is a qualitative comparison of our results with and without ensembling and EAS.

[Figure: weights_ablation]
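For example, to run a pre-trained model with both ensembling and EAS enabled (placeholders as in the generic command above):

python test.py --wb-settings T D S --model-name <MODEL NAME> --testing-dir <TEST IMAGE DIR> --outdir <RESULT DIR> --multi-scale True --post-process True --gpu <GPU NUMBER>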

Experimentally, we found that when ensembling is used, a target size of 384 is recommended, while without ensembling, 128×128 or 256×256 gives the best results. To control the size of input images at inference time, use --target-size. For instance, to set the target size to 256, use --target-size 256.

Network

Our network has a GridNet-like architecture, consisting of six columns and four rows. As shown in the figure below, it includes three main units: the residual unit (shown in blue), the downsampling unit (shown in green), and the upsampling unit (shown in yellow). If you are looking for the PyTorch implementation of GridNet, see src/gridnet.py.

[Figure: net]
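For orientation, below is a minimal PyTorch sketch of the three unit types. It is an illustrative approximation only; channel counts and layer details are assumptions, and the exact implementation is in src/gridnet.py.

import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Row-wise unit (blue): keeps resolution and channel count."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection around the conv block

class DownsamplingUnit(nn.Module):
    """Column-wise unit (green): halves resolution via a strided conv."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class UpsamplingUnit(nn.Module):
    """Column-wise unit (yellow): doubles resolution via interpolation."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)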

Results

Given this set of rendered images, our method learns to produce weighting maps that blend the rendered images into the final corrected image. Shown below are examples of the produced weighting maps.

[Figure: weights]
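Concretely, the corrected image is a per-pixel weighted sum of the N rendered images. A minimal sketch of this blending step (the tensor shapes and the normalization are assumptions, not the exact code in this repo):

import torch

def blend(rendered, weights):
    """Blend N rendered images using N predicted weighting maps.

    rendered: (N, 3, H, W) small images rendered with the predefined WB settings
    weights:  (N, 1, H, W) weighting maps predicted by the network
    Returns the corrected image of shape (3, H, W).
    """
    # Normalize so the per-pixel weights sum to one across the N settings.
    weights = weights / weights.sum(dim=0, keepdim=True).clamp(min=1e-8)
    return (weights * rendered).sum(dim=0)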

Shown below are qualitative comparisons of our results with the camera auto white-balance correction. In addition, we show the results of applying post-capture white-balance correction using the KNN white balance and deep white balance methods.

[Figure: qualitative_5k_dataset]

Our method has the limitation of requiring a modification to the camera ISP to render the additional small images with our predefined set of white-balance settings. To process images that have already been rendered by the camera (e.g., JPEG images), we can employ one of the sRGB white-balance editing methods to synthetically generate our small images with the predefined WB set post-capture.

In the figure shown below, we illustrate this idea by employing deep white-balance editing to generate the small images for a given sRGB camera-rendered image taken from Flickr. As shown, our method produces a better result compared to the camera-rendered image (i.e., traditional camera AWB) and the deep WB result for post-capture WB correction. If an input image does not have the associated small images (as described above), the provided source code automatically runs deep white-balance editing to generate them.

[Figure: qualitative_flickr]

Dataset

[Figure: dataset]

We generated a synthetic test set to quantitatively evaluate white-balance methods on mixed-illuminant scenes. Our test set consists of 150 images with mixed illuminations. The ground truth of each image is produced by rendering the same scene with a single fixed color temperature for all light sources and the camera auto white balance. Ground-truth images end with _G_AS.png, while input images end with _X_CS.png, where X refers to the white-balance setting used to render each image.
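When writing evaluation scripts, the ground-truth filename can be derived from an input filename using this convention. A tiny helper sketch (the function name is illustrative):

import re

def gt_name(input_name):
    """Map an input filename to its ground-truth filename, e.g.
    'scene12_T_CS.png' -> 'scene12_G_AS.png'."""
    return re.sub(r'_[TFDCS]_CS\.png$', '_G_AS.png', input_name)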

You can download our test set from one of the following links:

Acknowledgement

A big thanks to Mohammed Hossam for his help in generating our synthetic test set.

Commercial Use

This software and data are provided for research purposes only and CANNOT be used for commercial purposes.

Related Research Projects

  • C5: A self-calibration method for cross-camera illuminant estimation (ICCV 2021).
  • Deep White-Balance Editing: A multi-task deep learning model for post-capture white-balance correction and editing (CVPR 2020).
  • Interactive White Balancing: A simple method to link the nonlinear white-balance correction to the user's selected colors to allow interactive white-balance manipulation (CIC 2020).
  • White-Balance Augmenter: An augmentation technique based on camera WB errors (ICCV 2019).
  • When Color Constancy Goes Wrong: The first work to directly address the problem of incorrectly white-balanced images; requires a small memory overhead and it is fast (CVPR 2019).
  • Color temperature tuning: A modified camera ISP to allow white-balance editing in post-capture time (CIC 2019).
  • SIIE: A learning-based sensor-independent illumination estimation method (BMVC 2019).