PyTorch-Multi-Style-Transfer - Neural Style and MSG-Net

Overview

PyTorch-Style-Transfer

This repo provides a PyTorch implementation of MSG-Net (ours) and Neural Style (Gatys et al., CVPR 2016), which has been included by ModelDepot. We also provide a Torch implementation and an MXNet implementation.

Table of contents

  • MSG-Net
  • Stylize Images Using Pre-trained MSG-Net
  • Train Your Own MSG-Net Model
  • Neural Style
  • Acknowledgement

MSG-Net

Multi-style Generative Network for Real-time Transfer [arXiv] [project]
Hang Zhang, Kristin Dana
@article{zhang2017multistyle,
	title={Multi-style Generative Network for Real-time Transfer},
	author={Zhang, Hang and Dana, Kristin},
	journal={arXiv preprint arXiv:1703.06953},
	year={2017}
}

Stylize Images Using Pre-trained MSG-Net

  1. Download the pre-trained model
    git clone git@github.com:zhanghang1989/PyTorch-Style-Transfer.git
    cd PyTorch-Style-Transfer/experiments
    bash models/download_model.sh
  2. Camera Demo
    python camera_demo.py demo --model models/21styles.model
  3. Test the model
    python main.py eval --content-image images/content/venice-boat.jpg --style-image images/21styles/candy.jpg --model models/21styles.model --content-size 1024
  • If you don't have a GPU, simply set --cuda=0. For a different style, set --style-image to path/to/style. If you would like to stylize your own photo, set --content-image to path/to/your/photo. More options:

    • --content-image: path to content image you want to stylize.
    • --style-image: path to style image (typically one of the styles covered during training).
    • --model: path to the pre-trained model to be used for stylizing the image.
    • --output-image: path for saving the output image.
    • --content-size: the content image size to test on.
    • --cuda: set it to 1 for running on GPU, 0 for CPU.
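
To batch-process several photos, the same eval command can be wrapped in a short script. The following is a minimal sketch, not part of the repo, using only the flags documented above; the content folder, output folder, style image, and CPU/GPU choice are placeholder assumptions to adjust for your setup.

    import subprocess
    from pathlib import Path

    content_dir = Path("images/content")   # assumed folder of photos to stylize
    out_dir = Path("stylized")             # hypothetical output folder
    out_dir.mkdir(exist_ok=True)

    for content in content_dir.glob("*.jpg"):
        # Invoke the documented CLI once per content image.
        subprocess.run([
            "python", "main.py", "eval",
            "--content-image", str(content),
            "--style-image", "images/21styles/candy.jpg",
            "--model", "models/21styles.model",
            "--content-size", "512",
            "--output-image", str(out_dir / ("candy_" + content.name)),
            "--cuda", "0",                 # set to "1" if a GPU is available
        ], check=True)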

Train Your Own MSG-Net Model

  1. Download the COCO dataset
    bash dataset/download_dataset.sh
  2. Train the model
    python main.py train --epochs 4
  • If you would like to customize styles, set --style-folder path/to/your/styles. More options:
    • --style-folder: path to the folder of style images.
    • --vgg-model-dir: path to folder where the vgg model will be downloaded.
    • --save-model-dir: path to folder where trained model will be saved.
    • --cuda: set it to 1 for running on GPU, 0 for CPU.
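
A training run with custom styles can likewise be launched from a small wrapper that fails early if the inputs are missing. This is a sketch under assumptions, not part of the repo: images/my-styles is a hypothetical folder of your own style images, and it presumes the COCO images were already fetched by dataset/download_dataset.sh.

    import subprocess
    from pathlib import Path

    style_folder = Path("images/my-styles")    # hypothetical folder of your style images
    save_dir = Path("trained_models")          # where the trained model will be saved
    save_dir.mkdir(exist_ok=True)

    # Fail early if there is nothing to train on.
    if not any(style_folder.glob("*.jpg")):
        raise SystemExit(f"no style images found in {style_folder}")

    subprocess.run([
        "python", "main.py", "train",
        "--epochs", "4",
        "--style-folder", str(style_folder),
        "--vgg-model-dir", "models/",          # folder where the VGG model will be downloaded
        "--save-model-dir", str(save_dir),
        "--cuda", "1",                         # training on CPU is impractically slow
    ], check=True)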

Neural Style

Image Style Transfer Using Convolutional Neural Networks by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.

python main.py optim --content-image images/content/venice-boat.jpg --style-image images/21styles/candy.jpg
  • --content-image: path to content image.
  • --style-image: path to style image.
  • --output-image: path for saving the output image.
  • --content-size: the content image size to test on.
  • --style-size: the style image size to test on.
  • --cuda: set it to 1 for running on GPU, 0 for CPU.
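
Unlike MSG-Net's feed-forward generator, the optim mode follows Gatys et al.: it iteratively updates the output image itself so that its VGG feature maps match the content image while their Gram matrices match the style image. For reference, here is a minimal sketch of the Gram-matrix computation at the heart of the style loss; the exact normalization and layer choices used by main.py may differ.

    import torch

    def gram_matrix(features):
        # features: (batch, channels, height, width) VGG feature maps
        b, c, h, w = features.size()
        f = features.view(b, c, h * w)
        # Channel-by-channel correlations, normalized by the feature map size.
        return f.bmm(f.transpose(1, 2)) / (c * h * w)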

Acknowledgement

The code benefits from outstanding prior work and their implementations including:

Comments
  • training new model

    @zhanghang1989 I trained a model with three style images. Now, I see eight .model files. Can you please tell me which .model file to use, or how to integrate them into a single model file?

    Thanks Akash

    opened by akashdexati 7
  • Unable to resume training

    Hey,

    So I started training a model, but seeing how long it was going to take I wanted to double check I could successfully resume training.

    I ran:

      python3 main.py train --epochs 4 --style-folder images/xmas-styles/ --save-model-dir trained_models/

    until it generated the first checkpoint, then I ran:

      python3 main.py train --epochs 4 --style-folder images/xmas-styles/ --save-model-dir trained_models/ --resume trained_models/Epoch_0iters_8000_Sat_Dec__9_18\:10\:43_2017_1.0_5.0.model

    and waited for the first feedback report, which was:

      Sat Dec 9 18:17:09 2017 Epoch 1: [2000/123287] content: 254020.831359 style: 1666218.549250 total: 1920239.380609

    so it appeared not to have resumed at all.

    Also, a slight side question... Say I train with --epochs 4 until I get the final model... If I were to resume from the last checkpoint before the final one, but set --epochs 5 or higher, would that work correctly and just keep going through to 5 epochs before generating another final model, with no issues, etc.?

    opened by pingu2k4 6
  • Temporal coherence?

    Have you tried any technique for temporal coherence? If not, would you mind if I ask which one you would recommend or would like to try?

    Keep up the good work.

    opened by rraallvv 3
  • vgg16.t7 unhashable type: 'numpy.ndarray'

    It's been a while since the last vgg16 issue I found in this repo's Issues.

    So I downloaded the vgg16.t7 from the paper quoted in this GitHub repo, and I ran this command: "python main.py train --epochs 4 --style-folder images/ownstyles --save-model-dir own_models --cuda 1". I have put the vgg16.t7 into the models folder, and it's been detected correctly. However, the following problem happened.

    Traceback (most recent call last):
      File "main.py", line 295, in <module>
        main()
      File "main.py", line 41, in main
        train(args)
      File "main.py", line 135, in train
        utils.init_vgg16(args.vgg_model_dir)
      File "C:\Users\user\Prepwork\Cap Project\PyTorch-Multi-Style-Transfer\experiments\utils.py", line 100, in init_vgg16
        vgglua = load_lua(os.path.join(model_folder, 'vgg16.t7'))
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 424, in load
        return reader.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 370, in read_obj
        obj._obj = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 385, in read_obj
        k = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 386, in read_obj
        v = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 370, in read_obj
        obj._obj = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 387, in read_obj
        obj[k] = v
    TypeError: unhashable type: 'numpy.ndarray'
    

    Is there any way I can fix this? I found in another thread that they said to replace it with another one, but I could not find another copy other than the one from Stanford.

    Thanks!

    opened by fuddyduddy 2
  • Fix colab notebook

    Hi. Made some changes to the notebook:

    • fixed RuntimeError #21, #32, that was fixed in #31 and #37, but not for msgnet.ipynb;
    • removed unused import torch.nn.functional;
    • prettified according to pep8;
    • changed os.system('wget ...') to calling !wget ... directly, without importing the os module.

    Tested in colab (run all), the notebook works as expected without errors.

    opened by amrzv 1
  • Establish Docker directory

    What: Establishes a Docker directory with Dockerfile and run script

    Why: The original repo was written for an outdated version of PyTorch, which makes it hard to run on modern systems without conflicting with updated versions of the dependencies.

    Build the container with

    cd Docker
    docker build -t style-transfer .
    
    opened by ss32 1
  • Fix compatibility issues with torch==1.1.0

    RuntimeError: Error(s) in loading state_dict for Net:
    	Unexpected running stats buffer(s) "model1.1.running_mean" and "model1.1.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
    
    opened by jianchao-li 1
  • set default values

    Hi,

    I tried to run camera.py with the arguments described in the docs, but it fails because the code doesn't provide values for args.demo_size, and img.copy fails too. What are the default values to set for these variables?

    Thank you

    opened by gledsoul 1
  • Super Slow at optim on linux Mint

    I have this on a fresh install of Linux Mint. I'm running the example, 'python main.py optim --content-image images/content/venice-boat.jpg --style-image images/21styles/candy.jpg', and it's taking FOREVER to do anything. I used to have it working at a decent speed on Ubuntu on the same hardware.

    When inspecting GPU and CPU usage, I see it start off with minimal GPU usage and huge CPU usage. It slowly increases GPU usage over time until it has enough and then completes the rest in around the same time as before. As an example, it takes around 8 minutes to figure out that there isn't enough VRAM for the selected image size, whereas previously on my Ubuntu installation that would take a matter of seconds. Any idea why it would take so much longer on Mint? And what can I do to remedy this?

    opened by pingu2k4 1
  • "TypeError: 'torch.FloatTensor' object is not callable" running demo on CPU

    Sorry if I'm missing something, I'm unfamiliar with PyTorch. I'm running the demo on CPU on a Mac and getting the following error:

      File "camera_demo.py", line 93, in <module>
        main()
      File "camera_demo.py", line 90, in main
        run_demo(args, mirror=True)
      File "camera_demo.py", line 60, in run_demo
        simg = style_v.data().numpy()
    TypeError: 'torch.FloatTensor' object is not callable
    

    Thanks.

    opened by Carmezim 1
  • optim with normal RAM?

    Hi,

    So I have spent around 24 hours so far training a model on my style images, got the results out by using the model with eval, and so far they're not great. When I use the optim function with the styles, however, the results are pretty decent, but I am limited by my 6 GB of VRAM as to what size images I can output. Having a lot more RAM available, I was hoping I could do pretty decently sized images, but it seems that I can only get much larger images with eval. Does eval use normal RAM instead of VRAM?

    I will continue training my model so that I can use eval in the future, whether I can do larger images with optim or not, but no idea how much more training is required to make it anywhere near a respectable result.

    What sort of overall loss value should I be aiming for? Does the number of style images make a difference to what I should expect?

    opened by pingu2k4 1
  • Error Training TypeError: 'NoneType' object is not callable

    I was able to get my environment set up successfully to run eval; however, now, trying to train, I'm running into an issue. Not sure if it's a syntax issue or if something else is going on. Your help is greatly appreciated.

    
    #!/bin/bash
    #SBATCH --job-name=train-pytorch
    #SBATCH --mail-type=END,FAIL
    #SBATCH [email protected]
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00
    #SBATCH --mem=8000
    #SBATCH --gres=gpu:p100:2
    #SBATCH --cpus-per-task=6
    #SBATCH --output=%x_%j.log
    #SBATCH --error=%x_%j.err
    
    source ~/scratch/moldach/PyTorch-Style-Transfer/experiments/venv/bin/activate
    
    python main.py train \
      --epochs 4 \
      --style-folder /scratch/moldach/PyTorch-Style-Transfer/experiments/images/9styles \
      --vgg-model-dir vgg-model/ \
      --save-model-dir checkpoint/
    
    
    /scratch/moldach/first-order-model/venv/lib/python3.6/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
      "please use transforms.Resize instead.")
    Traceback (most recent call last):
      File "main.py", line 295, in <module>
        main()
      File "main.py", line 41, in main
        train(args)
      File "main.py", line 135, in train
        utils.init_vgg16(args.vgg_model_dir)
      File "/scratch/moldach/PyTorch-Style-Transfer/experiments/utils.py", line 102, in init_vgg16
        for (src, dst) in zip(vgglua.parameters()[0], vgg.parameters()):
    TypeError: 'NoneType' object is not callable
    
    

    pip freeze:

    $ pip freeze
    -f /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/avx2
    -f /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic
    -f /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic
    cffi==1.11.5
    cloudpickle==0.5.3
    cycler==0.10.0
    dask==0.18.2
    dataclasses==0.8
    decorator==4.4.2
    future==0.18.2
    imageio==2.9.0
    imageio-ffmpeg==0.4.3
    kiwisolver==1.3.1
    matplotlib==3.3.4
    networkx==2.5
    numpy==1.19.1
    pandas==0.23.4
    Pillow==8.1.2
    pycparser==2.18
    pygit==0.1
    pyparsing==2.4.7
    python-dateutil==2.8.1
    pytz==2018.5
    PyWavelets==1.1.1
    PyYAML==5.1
    scikit-image==0.17.2
    scikit-learn==0.19.2
    scipy==1.4.1
    six==1.15.0
    tifffile==2020.9.3
    toolz==0.9.0
    torch==1.7.0
    torchfile==0.1.0
    torchvision==0.2.1
    tqdm==4.24.0
    typing-extensions==3.7.4.3
    
    opened by moldach 4
  • Color produced by eval doesn't match demo

    Hi! Thanks for sharing the code. I've run the eval program using the defaults provided, and I noticed the color tends to be much dimmer than what is shown on the homepage here. Is there something that I am missing? The command I used was:

    python main.py --style-image ./images/21styles/udnie.jpg --content-image ./images/content/venice-boat.jpg

    [attached output image]

    opened by clarng 1
  • struct.error: unpack requires a buffer of 4 bytes

    Dear author, thank you so much for sharing this useful code. I am able to run your evaluation code, but I face the following error while running the training code:

      File "main.py", line 41, in main
        train(args)
      File "main.py", line 135, in train
        utils.init_vgg16(args.vgg_model_dir)
      File "/home2/st118370/models/PyTorch-Multi-Style-Transfer/experiments/utils.py", line 100, in init_vgg16
        vgglua = load_lua(os.path.join(model_folder, 'vgg16.t7'))
      File "/home2/st118370/anaconda3/envs/pytorch-py3/lib/python3.7/site-packages/torchfile.py", line 424, in load
        return reader.read_obj()
      File "/home2/st118370/anaconda3/envs/pytorch-py3/lib/python3.7/site-packages/torchfile.py", line 310, in read_obj
        typeidx = self.read_int()
      File "/home2/st118370/anaconda3/envs/pytorch-py3/lib/python3.7/site-packages/torchfile.py", line 277, in read_int
        return self._read('i')[0]
      File "/home2/st118370/anaconda3/envs/pytorch-py3/lib/python3.7/site-packages/torchfile.py", line 271, in _read
        return struct.unpack(fmt, self.f.read(sz))
      struct.error: unpack requires a buffer of 4 bytes

    How can I resolve this problem? Kindly guide. Thanks!

    opened by MFarooqAit 1
  • vgg16.t7 unhashable type: 'numpy.ndarray'

    hi

    I have put the vgg16.t7 into the models folder, and it's been detected correctly. However, the following problem happened.

    Traceback (most recent call last):
      File "main.py", line 295, in <module>
        main()
      File "main.py", line 41, in main
        train(args)
      File "main.py", line 135, in train
        utils.init_vgg16(args.vgg_model_dir)
      File "C:\Users\user\Prepwork\Cap Project\PyTorch-Multi-Style-Transfer\experiments\utils.py", line 100, in init_vgg16
        vgglua = load_lua(os.path.join(model_folder, 'vgg16.t7'))
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 424, in load
        return reader.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 370, in read_obj
        obj._obj = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 385, in read_obj
        k = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 386, in read_obj
        v = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 370, in read_obj
        obj._obj = self.read_obj()
      File "C:\Users\user\anaconda3\envs\FTDS\lib\site-packages\torchfile.py", line 387, in read_obj
        obj[k] = v
    TypeError: unhashable type: 'numpy.ndarray'

    It doesn't work with PyTorch 1.0.0 or 1.4.0, giving the same error in both. How can I deal with it? Thanks!

    opened by Gavin-Evans 13
  • Different brush stroke size

    In your paper you wrote about the ability to train the model with different sizes of the style images to later get control over the brush stroke size. Did you implement this in either the pytorch or torch implementation? Greetings and keep up the great work

    opened by lpiribauer 0
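
A recurring theme in the comments above (e.g. the torch==1.1.0 report) is loading the pre-0.4 21styles.model checkpoint in a newer PyTorch, where InstanceNorm2d no longer tracks running stats. The error message itself suggests removing the stale keys; below is a hedged sketch of that workaround, not part of the repo. The Net class name is taken from the quoted error; constructing it with the right arguments is left to main.py and only indicated in comments.

    import torch

    # Workaround sketch for the InstanceNorm2d running-stats error quoted above:
    # drop the stale running_mean / running_var buffers from the old checkpoint
    # before loading it, as the PyTorch error message itself suggests.
    state = torch.load("models/21styles.model", map_location="cpu")
    state = {k: v for k, v in state.items()
             if not (k.endswith("running_mean") or k.endswith("running_var"))}
    # net = Net(...)              # construct the MSG-Net model the same way main.py does
    # net.load_state_dict(state)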
Releases: v0.1