The source code of the CVPR 2019 paper "Deep Exemplar-based Video Colorization".

Overview

Deep Exemplar-based Video Colorization (Pytorch Implementation)

Paper | Pretrained Model | Youtube video 🔥 | Colab demo

Deep Exemplar-based Video Colorization, CVPR2019

Bo Zhang1,3, Mingming He1,5, Jing Liao2, Pedro V. Sander1, Lu Yuan4, Amine Bermak1, Dong Chen3
1Hong Kong University of Science and Technology, 2City University of Hong Kong, 3Microsoft Research Asia, 4Microsoft Cloud&AI, 5USC Institute for Creative Technologies

Prerequisites

  • Python 3.6+
  • Nvidia GPU + CUDA, CuDNN

Installation

First use the following commands to prepare the environment:

conda create -n ColorVid python=3.6
source activate ColorVid
pip install -r requirements.txt

Then, download the pretrained models from this link, unzip the file and place the files into the corresponding folders:

  • video_moredata_l1 under the checkpoints folder
  • vgg19_conv.pth and vgg19_gray.pth under the data folder

Data Preparation

To colorize your own video, you need to extract the video frames and provide a reference image as an example (a minimal frame-extraction sketch follows the list below).

  • Place your video frames into one folder, e.g., ./sample_videos/v32_180
  • Place your reference images into another folder, e.g., ./sample_videos/v32
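For reference, here is a minimal frame-extraction sketch. It is not part of this repository and OpenCV is not one of its stated dependencies; the video path and frame-naming scheme are placeholders:

import os
import cv2

video_path = "my_video.mp4"              # hypothetical input video
frame_dir = "./sample_videos/v32_180"    # folder that test.py will read

os.makedirs(frame_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # zero-padded names keep the frames in temporal order
    cv2.imwrite(os.path.join(frame_dir, f"{idx:05d}.jpg"), frame)
    idx += 1
cap.release()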

If you want to retrieve color reference images automatically, you can try the retrieval algorithm from this link, which retrieves similar images from the ImageNet dataset. Alternatively, you can try this link on your own image database.

Test

python test.py --image-size [image-size] \
               --clip_path [path-to-video-frames] \
               --ref_path [path-to-reference] \
               --output_path [path-to-output]

We provide several sample video clips with corresponding references. For example, you can colorize one of the sample legacy videos using:

python test.py --clip_path ./sample_videos/clips/v32 \
               --ref_path ./sample_videos/ref/v32 \
               --output_path ./sample_videos/output

Note that we use 216×384 images for training, which have an aspect ratio of 16:9. During inference, we scale the input to this size and then rescale the output back to the original size.
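As a hedged illustration of this resize-in / resize-back behaviour (PIL only; the file names and the stand-in for the network output are placeholders, not code from this repository):

from PIL import Image

frame = Image.open("frame_00000.jpg")                    # placeholder input frame
orig_size = frame.size                                   # (width, height)

net_input = frame.resize((384, 216), Image.BILINEAR)     # scale to the network resolution
# ... run the colorization network on net_input (omitted) ...
colorized = net_input                                    # stand-in for the network output
restored = colorized.resize(orig_size, Image.BILINEAR)   # rescale back to the original size
restored.save("frame_00000_colorized.jpg")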

Train

We also provide training code for reference. The training can be started by running:

python train.py --data_root [root of video samples] \
                --data_root_imagenet [root of image samples] \
                --gpu_ids [gpu ids]

We do not provide the full video dataset due to copyright issues. For image samples, we retrieve semantically similar images from ImageNet using this repository. Still, you can refer to our code to understand the detailed procedure for augmenting the image dataset to mimic video frames.
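As a rough illustration of that idea (this is not the exact augmentation pipeline in our code; the transform parameters and image path are illustrative assumptions), one can warp a single still image with slightly different random affine transforms to obtain a pseudo pair of consecutive frames:

from PIL import Image
import torchvision.transforms as T

# Small random affine jitter to simulate inter-frame motion
jitter = T.RandomAffine(degrees=3, translate=(0.02, 0.02), scale=(0.97, 1.03))

image = Image.open("imagenet_sample.jpg")   # placeholder ImageNet image
frame_a = jitter(image)                     # pseudo frame t
frame_b = jitter(image)                     # pseudo frame t+1, slightly shifted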

Comparison with State-of-the-Art Methods

More results

Please check our YouTube demo for video colorization results.

Citation

If you use this code for your research, please cite our paper:

@inproceedings{zhang2019deep,
  title={Deep exemplar-based video colorization},
  author={Zhang, Bo and He, Mingming and Liao, Jing and Sander, Pedro V and Yuan, Lu and Bermak, Amine and Chen, Dong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={8052--8061},
  year={2019}
}

Old Photo Restoration 🔥

If you are also interested in restoring artifacts in legacy photos, please check our recent work, Bringing Old Photos Back to Life.

@inproceedings{wan2020bringing,
  title={Bringing Old Photos Back to Life},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2747--2757},
  year={2020}
}

License

This project is licensed under the MIT license.

Comments
  • I met "CUDA error: an illegal memory access was encountered" problem

    When I tested your pre-trained model, I hit "CUDA error: an illegal memory access was encountered". Can you provide the versions of your CUDA, cuDNN, and PyTorch?

    opened by Zonobia-A 3
  • Could I apply this method to image colorization?

    Hi, could I apply this method to image colorization and remove the temporal consistency loss? BTW, how do I get the pairs.txt/pairs_mid.txt/pairs_bad.txt used in videoloader_imagenet.py?

    opened by buptlj 2
  • Video size very low

    The video colorization is very good and very impressive, but the rendered images are low resolution (768x432), and the video is the same size. How do I increase the image and video size? Thank you.

    opened by srinivas68 2
  • Training command is wrong

    The original training command is python --data_root [root of video samples] --data_root_imagenet [root of image samples] --gpu_ids [gpu ids]. Maybe it should be python train.py --data_root [root of video samples] --data_root_imagenet [root of image samples] --gpu_ids [gpu ids]?

    opened by Horizon2333 1
  • There seems to be a bug in feature centering with x_features - y_features.mean

    There seems to be a bug in feature centering with x_features - y_features.mean, which I think should be x_features - x_features.mean: https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization/blob/37639748f12dfecbb0a3fe265b533887b5fe46ce/models/ContextualLoss.py#L100

    opened by JerryLeolfl 1
  • The test code

    Thanks for your great work! I have a question when I run test.py. Why don't you extract the feature of the inference image outside the for loop? I haven't found any difference.

    opened by buptlj 1
  • CUDA OOM

    Hello, I am running a 4 GB Nvidia GPU. Is that enough for inference? I tried to run on Ubuntu 18.04 as well as Windows but always get an out-of-memory error eventually. Sometimes it happens after the 2nd image and sometimes after the 5th. This is 1080p video.

    opened by quocthaitang 1
  • Illustrative training data

    Could you please release a tiny illustrative training dataset, so that the preparation of custom training data can be easily followed? Currently, it is not easy to prepare custom training data by reading train.py. Or could you please give a further explanation of the following fields? (image1_name, image2_name, reference_video_name, reference_video_name1, reference_name1, reference_name2, reference_name3, reference_name4, reference_name5, reference_gt1, reference_gt2, reference_gt3) Thank you very much.

    opened by davyfeng 1
  • Runtime error

    Getting a runtime error when running the test cells at the 'first we visualize the input video' step. I'm not good with code, but this is the first time I've experienced this issue with this wonderful program. No cells execute after this error. I've attached a screenshot.

    opened by StevieMaccc 0
  • Test result problem

    After training, the generated test images do show colorization, but the color saturation is very low. Is it because of the colorization model or other reasons? I'm looking forward to your reply!

    opened by songyn95 0
  • Training has little effect

    Hello, I read in the paper that "we train the network for 10 epochs with a batch size of 40 pairs of video frames." Is it effective after only 10 epochs? Is your data 768 videos with 25 frames per video? I only train on one video at present with epoch=40, but I find that it has little effect. What might be the reason?

    opened by songyn95 0
  • Error 404 - Important files missing

    I was working with the Colab program and there appear to be important models/files missing. As a result, the program has ceased to function. I've brought it to the designers' attention, so hopefully it will be resolved.

    opened by StevieMaccc 1
  • CUDA device error "module 'torch._C' has no attribute '_cuda_setDevice'" when running test.py

    Hi !

    Trying out test.py results in the following error:

    Traceback (most recent call last):
      File "test.py", line 26, in <module>
        torch.cuda.set_device(0)
      File "C:\Users\natha\anaconda3\envs\ColorVid\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        torch._C._cuda_setDevice(device)
    AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

    I tried installing pytorch manually using their tool https://pytorch.org/get-started/locally/ (with CUDA 11.6) but that doesn't resolve the issue.

    Can someone help me understand what is going on? Thanks!

    opened by FoxTrotte 4
  • Questions about the test phase

    Thanks for your outstanding work! I have some questions when I read it.

    1. What are the settings when you test this video model on image colorization, as used for comparison with other image colorization methods?
    2. Could you please give me a URL for your video test set (the 116 video clips collected from Videvo)? Thanks again for your attention.
    opened by JerryLeolfl 0
  • The code in TestTransforms.py line 341 seems incorrect

    https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization/blob/37639748f12dfecbb0a3fe265b533887b5fe46ce/lib/TestTransforms.py#L341 seems to be a repeated definition of the call method.

    opened by JerryLeolfl 0
  • Wrong output resolution

    Processing a 4:3 video (912x720) outputs a cropped and downscaled 16:9 result (768x432). Playing around with "python test.py --image-size [image-size]" doesn't help. Maybe I don't specify the argument properly? So, what is the proper use of --image-size [image-size] in order to get 912x720 output? Any suggestions are greatly appreciated.

    opened by semel1 5