An implementation of the paper "A Neural Algorithm of Artistic Style"

Overview

A Neural Algorithm of Artistic Style implementation - Neural Style Transfer

This is an implementation of the research paper "A Neural Algorithm of Artistic Style" by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.

Inspiration

The mechanism by which biological vision perceives artistic imagery is still not well understood, and no artificial system yet interprets our visual experience of art the way humans do. The method proposed in this paper is a significant step towards explaining how biological vision might work when perceiving fine art.


Introduction

To quote authors Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, "in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery."

The idea of Neural Style Transfer is to take a white-noise image as input and change it so that it matches the content of a content image and the texture/artistic style of a style image, producing a new, stylized image.

We define two distances: a content distance that measures how different the content of two images is, and a style distance that measures how different their style is. The aim is to transform the white-noise input so that its content distance to the content image and its style distance to the style image are both minimized.
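As a rough illustration (not the repository's exact code), both distances can be written as mean-squared errors over feature maps extracted from a CNN, with the style distance computed on Gram matrices of those feature maps, as in the paper:

    import torch
    import torch.nn.functional as F

    def content_distance(input_features, content_features):
        # MSE between the feature maps of the generated image and the
        # content image at one chosen layer.
        return F.mse_loss(input_features, content_features)

    def gram_matrix(features):
        # features: (batch, channels, height, width) activations of one layer
        b, c, h, w = features.size()
        flat = features.view(b * c, h * w)
        # Inner products between the channel-wise feature maps, normalised by size.
        return torch.mm(flat, flat.t()) / (b * c * h * w)

    def style_distance(input_features, style_features):
        # MSE between the Gram matrices of the two images' feature maps.
        return F.mse_loss(gram_matrix(input_features), gram_matrix(style_features))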

Given below are some results from the original implementation.


Model Components

Our model architecture is as follows:

  • We have one module defining two classes responsible for calculating the content and style losses, and one class for applying normalization to the input.
  • We have a second module with three methods under one class, NST:
    • A method for image preprocessing.
    • Content and Style Model Representation - We used the feature space provided by the 16 convolutional and 5 pooling layers of the VGG-19 network. The style representation is matched on layers 'conv1_1', 'conv2_1', 'conv3_1', 'conv4_1' and 'conv5_1', and the content representation on layer 'conv4_2'. Minimizing the content and style losses transforms the white-noise input into an image that applies the artistic style of the style image to the content of the content image (see the sketch after this list).
    • A method for training - A third method calls the methods above to take content and style inputs from the user, preprocess them, and run the neural style transfer algorithm on a white-noise input image for 300 iterations using the L-BFGS optimizer, outputting a generated image that combines the given content and style images.
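The sketch below (illustrative PyTorch, not the repository's exact code) shows one way the chosen VGG-19 layers can be located: walk the pretrained feature extractor and give each convolution layer the paper's conv<block>_<index> name, so that the content-loss and style-loss modules can be inserted right after the layers listed above.

    import torch.nn as nn
    from torchvision import models

    # Layers used for the style and content representations, as in the paper.
    style_layers = ['conv1_1', 'conv2_1', 'conv3_1', 'conv4_1', 'conv5_1']
    content_layers = ['conv4_2']

    # Walk the pretrained VGG-19 feature extractor and name its convolution
    # layers conv<block>_<index>; pooling layers mark the start of a new block.
    vgg = models.vgg19(pretrained=True).features.eval()
    named_layers = {}
    block, conv = 1, 0
    for layer in vgg.children():
        if isinstance(layer, nn.Conv2d):
            conv += 1
            named_layers[f'conv{block}_{conv}'] = layer
        elif isinstance(layer, nn.MaxPool2d):
            block += 1
            conv = 0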


Implementation Details

  • PIL images have values between 0 and 255; when transformed into torch tensors, their values are scaled to the range 0 to 1, which is what networks from the torch library expect. Both images also need to be resized to the same dimensions. The image_loader() function takes the content and style image paths, loads and resizes the images, creates a white-noise input image, and returns the three tensors (a sketch is given after the note below).
  • The style_model_and_losses() function is responsible for calculating and returning the content and style losses, inserting the content-loss and style-loss layers immediately after the convolution layers whose activations they measure.
  • To quote the authors, "To generate the images that mix the content of a photograph with the style of a painting we jointly minimise the distance of a white noise image from the content representation of the photograph in one layer of the network and the style representation of the painting in a number of layers of the CNN". The run_nst() function performs the neural transfer. At each iteration, the updated input is fed into the network and new losses are computed; the backward method of each loss module is run to dynamically compute its gradients. The optimizer requires a "closure" function that re-evaluates the module and returns the loss (a sketch of this loop follows).
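A minimal sketch of such an optimization loop, assuming loss modules that store a .loss value during the forward pass (names, signatures, and weights here are illustrative, not the repository's exact API):

    import torch
    import torch.optim as optim

    def run_nst(model, input_img, content_losses, style_losses,
                num_steps=300, style_weight=1e6, content_weight=1):
        # L-BFGS optimises the pixels of the input image directly.
        input_img.requires_grad_(True)
        optimizer = optim.LBFGS([input_img])

        step = [0]
        while step[0] < num_steps:
            def closure():
                # Keep pixel values in a valid range between updates.
                with torch.no_grad():
                    input_img.clamp_(0, 1)
                optimizer.zero_grad()
                model(input_img)  # forward pass; loss modules record their losses
                style_score = style_weight * sum(sl.loss for sl in style_losses)
                content_score = content_weight * sum(cl.loss for cl in content_losses)
                loss = style_score + content_score
                loss.backward()
                step[0] += 1
                return loss
            optimizer.step(closure)

        with torch.no_grad():
            input_img.clamp_(0, 1)
        return input_img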

Note - Owing to limited computational power, the content and style images are resized to 512x512 when using a GPU, or 128x128 when on a CPU. A GPU is advisable because Neural Style Transfer is computationally very expensive.
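A rough sketch of the loading and resizing behaviour described above (transform details and the white-noise initialisation are assumptions, not the repository's exact code):

    import torch
    from PIL import Image
    from torchvision import transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    imsize = 512 if torch.cuda.is_available() else 128  # smaller on CPU to save time

    loader = transforms.Compose([
        transforms.Resize((imsize, imsize)),  # give both images the same dimensions
        transforms.ToTensor(),                # PIL values [0, 255] -> tensor values [0, 1]
    ])

    def image_loader(content_path, style_path):
        content = loader(Image.open(content_path)).unsqueeze(0).to(device, torch.float)
        style = loader(Image.open(style_path)).unsqueeze(0).to(device, torch.float)
        # White-noise starting image with the same shape as the content image.
        input_img = torch.randn(content.size(), device=device)
        return content, style, input_img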

Usage Guidelines

  • Cloning the Repository:

      git clone https://github.com/srijarkoroy/ArtiStyle
    
  • Entering the directory:

      cd ArtiStyle
    
  • Setting up the Python Environment with dependencies:

      pip install -r requirements.txt
    
  • Running the file:

      python3 test.py
    

Note: Before running the test file, please ensure that you provide valid paths to a content image and a style image, and set path='path to save the output image' if you want to save the result.

Check out the demo notebook here.

Results from implementation

Content Image | Style Image | Output Image

Contributors

Srijarko Roy (Owner) - AI Enthusiast!