DeepDreamAnimV2

This program creates videos based on Deep Dream. It is modified from DeepDreamAnim and DeepDreamVideo, with additional functions for blending two frames based on optical flow. It also supports dividing the image so that the Deep Dream algorithm can be applied to large images.

A sample video created by this script from a 4K-resolution panoramic video. The sample is cropped to the view from a head-mounted display.

Use Optical Flow to Adjust Deep Dream Video

The -flow option enables the optical flow mode. The optical flow of each frame is calculated by comparing the movement of all pixels between the current and the previous frame. The hallucinatory patterns in the areas where optical flow was detected are merged into the current (not-yet-hallucinatory) frame, weighted by a user-defined blending ratio (0 = no information, 1 = all information). This lets the current frame inherit some of the hallucinatory content of the previous frame. The Deep Dream algorithm is then applied to the merged frame, instead of starting from scratch for each frame. The procedure is as follows:

  1. The optical flow is calculated between the previous and current frame (before the Deep Dream algorithm is applied).
  2. The hallucinatory patterns within the areas of high optical flow in the previous frame are shifted in the direction of the flow.
  3. These shifted hallucinatory patterns from the previous frame are merged into the current (not-yet-hallucinatory) frame with the specified blending ratio.

The blending ratios for the optical flow areas and the other (background) areas can be specified separately with the -bm and -bs options. The blending ratio ranges from 0 to 1. A blending ratio of 1 means that the current frame inherits 100% of the hallucinatory content from the previous frame before the Deep Dream algorithm is applied. A blending ratio of 0 means the previous frame is discarded, and the Deep Dream algorithm is applied from scratch.
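As a rough illustration of how this flow-based blending can work, the sketch below uses OpenCV's dense (Farneback) optical flow to warp the previous dreamed frame and blend it into the current raw frame. The function name, parameters, and warping details are assumptions for illustration, not the script's actual internals.

# A minimal sketch of the flow-based blending step, assuming OpenCV's dense
# Farneback optical flow. Names and parameters here are illustrative only.
import cv2
import numpy as np

def blend_with_flow(prev_raw, curr_raw, prev_dreamed,
                    flow_thresh=6.0, blend_flow=0.9, blend_static=0.1):
    prev_gray = cv2.cvtColor(prev_raw, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_raw, cv2.COLOR_BGR2GRAY)

    # 1. Dense optical flow between the previous and current raw frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # 2. Shift the previous dreamed frame along the flow (approximate backward warp).
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_dreamed, map_x, map_y, cv2.INTER_LINEAR)

    # 3. Blend per pixel: moving areas use blend_flow (-bm), static areas blend_static (-bs).
    magnitude = np.linalg.norm(flow, axis=2)
    ratio = np.where(magnitude > flow_thresh, blend_flow, blend_static)[..., None]
    merged = ratio * warped + (1.0 - ratio) * curr_raw
    return merged.astype(np.uint8)  # Deep Dream is then run on this merged frame

With -bm 0.9 and -bs 0.1, for example, the moving regions keep most of the previous frame's hallucination while the static background is mostly recomputed each frame.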

Dividing the Image

When using a GPU for the Deep Dream algorithm, the maximum image size is capped by the video memory of your GPU. To process large images such as 4K resolution, it is necessary to divide the input image into smaller sub-images before applying Deep Dream. The -d or --divide option enables this function.

-d 0 : disable image division

-d 1 : divide the image into sub-images no larger than MAX_WIDTH and MAX_HEIGHT, specified by the -mw and -mh options

-d 2 : divide the image in half when its width is larger than MAX_WIDTH, specified by the -mw option

Each divided sub-image is processed by Deep Dream individually, and the results are then merged back into one image. When an octave scale is used, the original Deep Dream algorithm processes the image at a smaller size in the beginning; in that case, the image division is not applied during the earlier iterations (before the image is enlarged back to its original size).
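As a rough illustration of divide mode 1, a frame can be tiled into sub-images bounded by the -mw / -mh limits, dreamed tile by tile, and stitched back together. This is only a sketch under that assumption; the deepdream callable and the tiling details are placeholders for the script's actual per-tile processing.

# Minimal sketch of divide mode 1 (assumed behaviour): tile the frame into
# sub-images no larger than max_width x max_height, dream each tile, and
# stitch the results back together. `deepdream` is a placeholder callable.
import numpy as np

def dream_in_tiles(frame, deepdream, max_width=1500, max_height=1500):
    h, w = frame.shape[:2]
    out = np.empty_like(frame)
    for y in range(0, h, max_height):
        for x in range(0, w, max_width):
            tile = frame[y:y + max_height, x:x + max_width]
            # Each sub-image must come back at the size it went in.
            out[y:y + tile.shape[0], x:x + tile.shape[1]] = deepdream(tile)
    return out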

Usage

usage: dreamer.py [-h] -i INPUT -o OUTPUT -it IMAGE_TYPE [--gpu GPU]
                       [-t MODEL_PATH] [-m MODEL_NAME] [-p PREVIEW]
                       [-oct OCTAVES] [-octs OCTAVESCALE] [-itr ITERATIONS]
                       [-j JITTER] [-z ZOOM] [-s STEPSIZE]
                       [-l LAYERS [LAYERS ...]] [-v VERBOSE]
                       [-g GUIDE_IMAGE] [-flow FLOW] [-flowthresh FLOWTHRESH]
                       [-bm BLEND_FLOW] [-bs BLEND_STATIC] [-d DIVIDE_MODE]
                       [-mw MAX_WIDTH] [-mh MAX_HEIGHT]

Original Arguments:

  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input directory where extracted frames are stored
  -o OUTPUT, --output OUTPUT
                        Output directory where processed frames are to be
                        stored
  -it IMAGE_TYPE, --image_type IMAGE_TYPE
                        Specify whether jpg or png
  --gpu GPU             Switch for gpu computation.
  -t MODEL_PATH, --model_path MODEL_PATH
                        Model directory to use
  -m MODEL_NAME, --model_name MODEL_NAME
                        Caffe Model name to use
  -p PREVIEW, --preview PREVIEW
                        Preview image width. Default: 0
  -oct OCTAVES, --octaves OCTAVES
                        Octaves. Default: 4
  -octs OCTAVESCALE, --octavescale OCTAVESCALE
                        Octave Scale. Default: 1.4
  -itr ITERATIONS, --iterations ITERATIONS
                        Iterations. Default: 10
  -j JITTER, --jitter JITTER
                        Jitter. Default: 32
  -z ZOOM, --zoom ZOOM  Zoom in Amount. Default: 1
  -s STEPSIZE, --stepsize STEPSIZE
                        Step Size. Default: 1.5
  -l LAYERS [LAYERS ...], --layers LAYERS [LAYERS ...]
                        Array of Layers to loop through. Default: [customloop]
                        - or choose ie [inception_4c/output] for that single
                        layer
  -v VERBOSE, --verbose VERBOSE
                        verbosity [0-3]
  -g GUIDE_IMAGE, --guide GUIDE_IMAGE
                        A guide image

Additional Arguments:

   -flow FLOW, --flow FLOW
                        Take optical flow into account
   -flowthresh FLOWTHRESH, --flowthresh FLOWTHRESH
                        Threshold for detecting a flow
   -bm BLEND_FLOW, --blendflow BLEND_FLOW
                        Blend ratio for the flowing part
   -bs BLEND_STATIC, --blendstatic BLEND_STATIC
                        Blend ratio for the static part

   -d [0-2], --divide [0-2]
                        Divide the image into sub-images [0: disable,
                        1: divide by maxWidth and maxHeight, 2: divide in
                        half if the width exceeds maxWidth]
   -mw MAX_WIDTH, --maxWidth MAX_WIDTH
                        Maximum width for dividing the image
   -mh MAX_HEIGHT, --maxHeight MAX_HEIGHT
                        Maximum height for dividing the image (only used in divide mode 1)

Requirements

  • Python
  • Caffe (and other deepdream dependencies)
  • FFMPEG
  • OpenCV (cv2) (if you use optical flow)

Examples

1-extract.bat:

python dreamer.py -e 1 --input RHI-20Sec.mov --output Input

2-dream.bat:

python dreamer.py --input Input --output Output --octaves 3 --octavescale 1.8 --iterations 16 --jitter 32 --zoom 1 --stepsize 1.5 --flow 1 --flowthresh 6 --blendflow 0.9 --blendstatic 0.1 --layers inception_4d/pool --gpu 1 -d 2 -mw 1500

3-create.bat:

python dreamer.py -c 1 --input Output --output Video.mp4
