Video2x - A lossless video/GIF/image upscaler achieved with waifu2x, Anime4K, SRMD and RealSR.

Overview


Official Discussion Group (Telegram): https://t.me/video2x

A Discord server is also available. Please note that most developers are only on Telegram; if you join the Discord server, the developers might not see your questions and be able to help you. It is mostly for user-to-user interaction and for those who do not want to use Telegram.

Download Stable/Beta Builds (Windows)

  • Full: the full package comes pre-configured with all dependencies, such as FFmpeg and waifu2x-caffe.
  • Light: the light package comes with only the Video2X binaries and a template configuration file. Users will have to either run the setup script or install and configure the dependencies themselves.

Go to the Quick Start section for usage instructions.

Download From Mirror

In case you're unable to download the releases directly from GitHub, you can try downloading from the mirror site hosted by the author. Only releases will be updated in this directory, not nightly builds.

Download Nightly Builds (Windows)

You need to be logged into GitHub to be able to download GitHub Actions artifacts.

Nightly builds are built automatically every time a new commit is pushed to the master branch. The latest nightly build is always up to date with the latest version of the code, but is less stable and may contain bugs. Nightly builds are handled by GitHub's integrated CI/CD tool, GitHub Actions.

To download the latest nightly build, go to the GitHub Actions tab, open the latest run of the workflow "Video2X Nightly Build", and download the artifacts generated by that run.

Docker Image

Video2X Docker images are available on Docker Hub for easy and rapid Video2X deployment on Linux and macOS. If you already have Docker installed, only one command is needed to start upscaling a video. For more information on how to use Video2X's Docker image, please refer to the documentation.

Google Colab

You can use Video2X on Google Colab for free. Colab allows you to use a GPU on Google's servers (Tesla K80, T4, P4, P100). Please bear in mind that Colab can only remain free if users do not abuse it. A single free-tier session can last up to 12 hours. Please do not abuse the platform by creating sessions back-to-back and running upscaling 24/7; doing so might get you banned.

Here is an example notebook written by @Felixkruemel: Video2X_on_Colab.ipynb. This file can be used in combination with the following modified configuration file: @Felixkruemel's Video2X configuration for Google Colab.

Introduction

Video2X is a video/GIF/image upscaling software based on Waifu2X, Anime4K, SRMD and RealSR, written in Python 3. It upscales videos, GIFs and images, restoring details from low-resolution inputs. Video2X also supports GIF input to video output and video input to GIF output.

Currently, Video2X supports the following drivers (implementations of algorithms).

  • Waifu2X Caffe: Caffe implementation of waifu2x
  • Waifu2X Converter CPP: CPP implementation of waifu2x based on OpenCL and OpenCV
  • Waifu2X NCNN Vulkan: NCNN implementation of waifu2x based on Vulkan API
  • SRMD NCNN Vulkan: NCNN implementation of SRMD based on Vulkan API
  • RealSR NCNN Vulkan: NCNN implementation of RealSR based on Vulkan API
  • Anime4KCPP: CPP implementation of Anime4K
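For illustration, each driver is selected by a name passed to the CLI's -d flag. waifu2x_caffe and waifu2x_ncnn_vulkan appear later in this README; the remaining identifiers below are assumptions. A minimal registry sketch:

```python
# Hypothetical driver registry mapping CLI "-d" names to the drivers listed
# above; only waifu2x_caffe and waifu2x_ncnn_vulkan are confirmed by this
# README, the other identifiers are illustrative guesses.
DRIVERS = {
    "waifu2x_caffe": "Waifu2X Caffe",
    "waifu2x_converter_cpp": "Waifu2X Converter CPP",
    "waifu2x_ncnn_vulkan": "Waifu2X NCNN Vulkan",
    "srmd_ncnn_vulkan": "SRMD NCNN Vulkan",
    "realsr_ncnn_vulkan": "RealSR NCNN Vulkan",
    "anime4kcpp": "Anime4KCPP",
}

def get_driver(name: str) -> str:
    """Resolve a driver name, raising a clear error for unknown drivers."""
    if name not in DRIVERS:
        raise ValueError(f"unknown driver: {name}")
    return DRIVERS[name]
```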

Video Upscaling

Spirited Away Demo
Upscale Comparison Demonstration

You can watch the whole demo video on YouTube: https://youtu.be/mGEfasQl2Zo

Clip is from trailer of animated movie "千と千尋の神隠し". Copyright belongs to "株式会社スタジオジブリ (STUDIO GHIBLI INC.)". Will delete immediately if use of clip is in violation of copyright.

GIF Upscaling

The original input GIF is 160x120 in size; it has been downsized and accelerated to 20 FPS from the original clip.

catfru
Catfru original 160x120 GIF image

Below is what it looks like after getting upscaled to 640x480 (4x) using Video2X.

catfru4x
Catfru 4x upscaled GIF
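The 4x upscale above is simple arithmetic on the frame dimensions; a minimal sketch:

```python
def scaled_size(width: int, height: int, ratio: float) -> tuple:
    # Multiply both sides by the scaling ratio, e.g. 160x120 at 4x -> 640x480
    return round(width * ratio), round(height * ratio)
```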

Image Upscaling

jill_comparison
Image upscaling example

Original image from [email protected], edited by K4YT3X.

All Demo Videos

Below is a list of all the demo videos available. The list is sorted from new to old.


Screenshots

Video2X GUI

GUI Preview
Video2X GUI Screenshot

Video2X CLI

Video2X CLI Screenshot
Video2X CLI Screenshot


Sample Videos

If you can't find a video clip to begin with, or if you want to see a before-and-after comparison, we have prepared some sample clips for you. The quick start guide below also uses the file names of these sample clips.

sample_video
Sample Upscale Videos

Clip is from anime "さくら荘のペットな彼女". Copyright belongs to "株式会社アニプレックス (Aniplex Inc.)". Will delete immediately if use of clip is in violation of copyright.


Quick Start

Prerequisites

Before running Video2X, you'll need to ensure you have installed the drivers' external dependencies such as GPU drivers.

  • waifu2x-caffe
    • GPU mode: Nvidia graphics card driver
    • cuDNN mode: Nvidia CUDA and cuDNN
  • Other Drivers
    • GPU driver if you want to use GPU for processing

Running Video2X (GUI)

The easiest way to run Video2X is to use the full build. Extract the full release zip file and you'll get these files.

Video2X Release Files
Video2X release files

Simply double click on video2x_gui.exe to launch the GUI.

Video2X GUI Main Tab
Video2X GUI main tab

Then, drag the videos you wish to upscale into the window and select the appropriate output path.

drag-drop
Drag and drop file into Video2X GUI

Tweak the settings if you want to, then hit the start button at the bottom and the upscale will start. Now you'll just have to wait for it to complete.

upscale-started
Video2X started processing input files

Running Video2X (CLI)

Basic Upscale Example

The example command below uses waifu2x-caffe to enlarge the video sample-input.mp4 to double its original size.

python video2x.py -i sample-input.mp4 -o sample-output.mp4 -r 2 -d waifu2x_caffe

Advanced Upscale Example

If you would like to tweak engine-specific settings, either specify the corresponding arguments after --, or edit the corresponding fields in the configuration file video2x.yaml. Command-line arguments overwrite the default values in the config file.

The example below enables TTA for waifu2x-caffe.

python video2x.py -i sample-input.mp4 -o sample-output.mp4 -r 2 -d waifu2x_caffe -- --tta 1
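Conceptually, the override behavior can be sketched as follows (the keys tta and gpu are illustrative, not Video2X's actual config schema):

```python
def merge_settings(config_defaults: dict, cli_overrides: dict) -> dict:
    # Command-line values win over video2x.yaml defaults; options that were
    # not given on the command line (None) leave the config value untouched.
    merged = dict(config_defaults)
    merged.update({k: v for k, v in cli_overrides.items() if v is not None})
    return merged
```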

To see a help page for driver-specific settings, select the driver with -d and append -- --help, as demonstrated below. This prints all driver-specific settings and their descriptions.

python video2x.py -d waifu2x_caffe -- --help
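The -- separator splits the command line into Video2X's own options and the options passed through to the driver; a hedged sketch of that split:

```python
def split_driver_args(argv: list) -> tuple:
    # Everything before "--" goes to Video2X itself; everything after is
    # passed through verbatim to the selected driver. Illustrative only,
    # not Video2X's actual parsing code.
    if "--" in argv:
        i = argv.index("--")
        return argv[:i], argv[i + 1:]
    return argv, []
```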

Running Video2X (Docker)

Video2X can be deployed via Docker. The following command upscales the video sample_input.mp4 two times with Waifu2X NCNN Vulkan and outputs the upscaled video to sample_output.mp4. For more details on Video2X Docker image usage, please refer to the documentation.

docker run --rm -it --gpus all -v /dev/dri:/dev/dri -v $PWD:/host k4yt3x/video2x:4.6.0 -d waifu2x_ncnn_vulkan -r 2 -i sample_input.mp4 -o sample_output.mp4

Documentation

Video2X Wiki

You can find all detailed user-facing and developer-facing documentation in the Video2X Wiki. It covers everything from step-by-step instructions for beginners to the code structure of this program for advanced users and developers. If this README page doesn't answer all your questions, the wiki is where you should head.

Step-By-Step Tutorial

For those who want a detailed walk-through of how to use Video2X, you can head to the Step-By-Step Tutorial wiki page. It includes almost every step you need to perform in order to enlarge your first video.

Run From Source Code

This wiki page contains all instructions for how you can run Video2X directly from Python source code.

Drivers

Go to the Drivers wiki page for a detailed description of the different drivers implemented by Video2X. It explains the differences between the drivers and how to download and set up each of them for use with Video2X.

Q&A

If you have any questions, first try visiting our Q&A page to see if your question is answered there. If not, open an issue and we will respond to your questions ASAP. Alternatively, you can also join our Telegram discussion group and ask your questions there.

History

Are you interested in how the idea of Video2X was born? Do you want to know the stories and history behind Video2X's development? Head over to this page.


License

Licensed under the GNU General Public License Version 3 (GNU GPL v3) https://www.gnu.org/licenses/gpl-3.0.txt

GPLv3 Icon

(C) 2018-2021 K4YT3X

Credits

This project relies on the following software and projects.

Special Thanks

Thanks to the following people who have contributed significantly to the project (particularly from a technical perspective).

Related Projects

  • Dandere2x: A lossy video upscaler also built around waifu2x, but with video compression techniques to shorten the time needed to process a video.
  • Waifu2x-Extension-GUI: A similar project that focuses solely on building a better graphical user interface. It is built using C++ and Qt5, and currently only supports the Windows platform.

Comments
  • Upscaling taking too long


    Hi! I'm trying to upscale a video using Video2X. Running this: python3 video2x-4.7.0/src/video2x.py -i ./test.mp4 -o ./upsized.mp4 -d realsr_ncnn_vulkan -h 512 -w 512 says it will take approximately 26 hours to run.

    It's a 24-second, 60 FPS video. Is this normal?

    I'm running it on WSL, Ubuntu 22.04.1 LTS, Python 3.10.6.

    These are my PC specs: GPU: Nvidia GeForce RTX 3070, 8 GB VRAM; Processor: Intel(R) Core(TM) i9-10900KF CPU @ 3.70GHz; Installed RAM: 16.0 GB DDR4

    I don't know if it's taking full advantage of my graphics card, because the GPU CUDA graph in Task Manager showed very low usage. The CPU, on the other hand, was at 99% for the 2 hours I let it run. I cancelled because I thought 26 hours was just too long to have my computer do the upscaling...

    Could you help me with this, please?

    Regards

    opened by fmPeretti 1
  • Can't I make a video with the backed up dump folder?


    I did upscaling for the first time. I waited 17 hours, and when I checked the final result, there was a problem. If I try again, I'll have to wait another 17 hours.

    I backed up all 71 GB of upscaled images before the temp folder was deleted. Can't I make a video with the images I backed up? The original video is 8 minutes and 5 seconds long.

    Please help me .

    opened by CHANEL2 0
  • 5.0.0-beta-6 is working, but I'm getting awful fps.


    After tinkering with this for hours and getting docker and nvidia-docker2 set up, it finally appears to be running with the following command:

    sudo docker run -it --rm --privileged --gpus='all,"capabilities=compute,utility,graphics,display"' --env DISPLAY:$DISPLAY -v $PWD:/host ghcr.io/k4yt3x/video2x:5.0.0-beta6 -i input.mp4 -o output.mp4 -p1 upscale -h 720 -a waifu2x -n3

    ...but I'm getting a lousy frame rate, like between .17 and .56 fps. My laptop's 3070 is not engaging at all.

    `(base) [email protected]:~/Archive/temp1$ sudo docker run -it --rm --privileged --gpus='all,"capabilities=compute,utility,graphics,display"' --env DISPLAY:$DISPLAY -v $PWD:/host ghcr.io/k4yt3x/video2x:5.0.0-beta6 -i rmvhsm.mp4 -o output.mp4 -p1 upscale -h 720 -a waifu2x -n3
    22:39:54.057817 | INFO     | Video2X 5.0.0-beta6
    22:39:54.057944 | INFO     | Copyright (C) 2018-2022 K4YT3X and contributors.
    22:39:54.058000 | INFO     | Reading input video information
    22:39:54.108182 | INFO     | Starting video decoder
    22:39:54.110863 | INFO     | Starting video encoder
    [0 Intel(R) UHD Graphics (TGL GT1)]  queueC=0[1]  queueG=0[1]  queueT=0[1]
    [0 Intel(R) UHD Graphics (TGL GT1)]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
    [0 Intel(R) UHD Graphics (TGL GT1)]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
    [0 Intel(R) UHD Graphics (TGL GT1)]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1
    [1 NVIDIA GeForce RTX 3070 Laptop GPU]  queueC=2[8]  queueG=0[16]  queueT=1[2]
    [1 NVIDIA GeForce RTX 3070 Laptop GPU]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
    [1 NVIDIA GeForce RTX 3070 Laptop GPU]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
    [1 NVIDIA GeForce RTX 3070 Laptop GPU]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1
    ../src/intel/isl/isl.c:2105: FINISHME: ../src/intel/isl/isl.c:isl_surf_supports_ccs: CCS for 3D textures is disabled, but a workaround is available.
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'rmvhsm.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf56.36.100
      Duration: 00:05:00.05, start: 0.000000, bitrate: 266 kb/s
        Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), 
    yuv420p, 768x432 [SAR 1:1 DAR 16:9], 207 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
    (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 22050 Hz, stereo, 
    fltp, 55 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Output #0, rawvideo, to 'pipe:1':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.29.100
        Stream #0:0(eng): Video: rawvideo (RGB[24] / 0x18424752), rgb24, 768x432 
    [SAR 1:1 DAR 16:9], q=2-31, 199065 kb/s, 25 fps, 25 tbn, 25 tbc (default)
        Metadata:
          handler_name    : VideoHandler
          encoder         : Lavc58.54.100 rawvideo
    
    Input #0, rawvideo, from 'pipe:0':
      Duration: N/A, start: 0.000000, bitrate: 552960 kb/s
        Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 1280x720, 552960
    kb/s, 25 tbr, 25 tbn, 25 tbc
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'rmvhsm.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf56.36.100
      Duration: 00:05:00.05, start: 0.000000, bitrate: 266 kb/s
        Stream #1:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), 
    yuv420p, 768x432 [SAR 1:1 DAR 16:9], 207 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
    (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #1:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 22050 Hz, stereo, 
    fltp, 55 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
    Stream mapping:
      Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
      Stream #1:1 -> #0:1 (aac (native) -> aac (native))
    [libx264 @ 0x559f0452a4c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
    AVX FMA3 BMI2 AVX2 AVX512
    [libx264 @ 0x559f0452a4c0] profile High, level 5.0
    [libx264 @ 0x559f0452a4c0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec
    - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 
    ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 
    mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 
    fast_pskip=1 chroma_qp_offset=-2 threads=22 lookahead_threads=3 sliced_threads=0
    nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 
    b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 
    keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=crf 
    mbtree=1 crf=17.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        comment         : Processed with Video2X
        encoder         : Lavf58.29.100
        Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), 
    yuv420p(progressive), 1280x720, q=-1--1, 25 fps, 12800 tbn, 25 tbc
        Metadata:
          encoder         : Lavc58.54.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
        Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 22050 Hz, stereo, 
    fltp, 128 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
          encoder         : Lavc58.54.100 aac
    
    Upscaling ━╸━━━━━━━━━━━━━━━━━━━━━━━━━   7% (546/7500) 0.56 FPS 0:16:22 < 3:27:42`
    

    Any assistance would be appreciated. Thanks.

    opened by jsly8672 0
  • Google Collab - "Expected Output File Does Not Exist"

    Hi, I'm trying to use the provided Google Colab page to upscale a video. All the steps on the page work fine except the very last one, which gives the "Expected output file does not exist" error (see attached image).

    I'm not a programmer so if anyone can give advice on how to fix this issue, I would be very grateful.

    Link to the website: https://colab.research.google.com/github/pollinations/hive/blob/main/notebooks/7%20Video-To-Video/2%20Video2X.ipynb#scrollTo=AGiMH-UOKgRh

    Error Screenshot: Capture

    Thanks

    opened by Klimt1234 0
Releases
  • 5.0.0-beta5(Apr 1, 2022)

    • Added support for RealCUGAN ncnn Vulkan
    • Added pause function
      • Use the global hotkey Ctrl+Alt+V or send SIGUSR1 to the main process to pause
    • Fixed various problems with the progress bar

    The container has also been patched so it will run fine on both AMD and NVIDIA GPUs. The required Docker arguments have changed for AMD GPUs. See the Wiki page for more information.

    Source code(tar.gz)
    Source code(zip)
  • 5.0.0-beta4(Feb 19, 2022)

  • 5.0.0-beta3(Feb 15, 2022)

    • Changed the use of the term "driver" to "algorithm", since the backend implementation is now mostly abstracted from the user.
      • The argument -d, --driver has been renamed to -a, --algorithm.
    Source code(tar.gz)
    Source code(zip)
  • 5.0.0-beta2(Feb 15, 2022)

  • 5.0.0-beta1(Feb 11, 2022)

    ⚠️ Known Issues

    • Errors in handling queued images result in abnormal memory consumption. The program will fill your RAM in a short time if you attempt to process a large video file. This issue has been fixed in 5.0.0-beta2.
    Source code(tar.gz)
    Source code(zip)
  • 4.8.1(Dec 21, 2020)

    Bug Fixes

    • Fixed Anime4KCPP driver import error
    • Rolled Gifski back to a version without output bugs

    Checksums:

    • video2x-4.8.1-win32-light.zip
      • SHA-256: 9C746151CBFA432887759FB783C78A70ED273A6A4008C0BF030220FCD79D29BE
    • video2x-4.8.1-win32-full.zip
      • SHA-256: 45FCE1334762B6BF8190FE0603C8F16087BC225AFA2D87D1B472B98D4DB1C048

    ⚠️ Known Issues

    • Warning: some users have reported that the software did not make it clear that the cache folder is deleted after the upscale completes. If you want to use a custom cache folder, choose a dedicated folder with no other contents in it, as the cache folder is wiped once upscaling is done. A warning will be added to the GUI in an upcoming version.

    Update

    The light version has been removed, since some upstream changes broke the setup script.

    Source code(tar.gz)
    Source code(zip)
    video2x-4.8.1-win32-full.zip(999.91 MB)
  • 4.8.0(Dec 13, 2020)

    New Features

    • Updated waifu2x-ncnn-vulkan arguments
    • Added custom frame format for waifu2x/srmd/realsr-ncnn-vulkan
    • Re-added the mimetypes fallback for MIME type detection, for cases where python-magic fails to detect the correct MIME type
    • Console subprocess.Popen outputs are now redirected to the error log instead of the console output
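The mimetypes fallback mentioned above can be sketched like this (a simplified illustration, not Video2X's actual code):

```python
import mimetypes

def guess_mime(path: str):
    # Try python-magic first, which inspects file contents; fall back to the
    # standard library's extension-based mimetypes module if python-magic is
    # unavailable or fails, as the release notes describe.
    try:
        import magic  # third-party python-magic; may be absent
        return magic.from_file(path, mime=True)
    except Exception:
        return mimetypes.guess_type(path)[0]
```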

    Bug Fixes

    • Fixed setup script FFmpeg invalid provider
    • Fixed GUI hwaccel value overwrite issue
    • Fixed 1.0 upscale ratio issues
    • Fixed setup script Gifski installation issues

    Checksums:

    • video2x-4.8.0-win32-light.zip
      • SHA-256: C3E7C235CF2922266CA15AEB631C92849E3312B0A35BB700A4C310B4A93EE4D9
    • video2x-4.8.0-win32-full.zip
      • SHA-256: 80CD7D491BA96537A492C2E0931E3B34321006308A71142E4148376F7C90173B
    Source code(tar.gz)
    Source code(zip)
    video2x-4.8.0-win32-full.zip(997.71 MB)
    video2x-4.8.0-win32-light.zip(68.20 MB)
  • 4.7.0(Sep 14, 2020)

    New Features

    • Arbitrary scaling ratio support for all drivers
    • Arbitrary scaling resolution (width/height) support for all drivers
      • Allows only one of width/height to be specified and the other side will be calculated automatically according to the scaling ratio
    • Removed the use of shlex.join and := to make the program compatible with Python 3.6 and higher
    • Updated UI design for better visibility of the preview image
    • Redesigned logging system and error reporting system
      • Logs can now be saved optionally when the program runs into an error
      • If unspecified, logs will be deleted automatically upon normal exit of the program
      • Logs are now written to a tempfile.TemporaryFile object by default instead of a conventional log file
    • Various other improvements and code optimizations
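The width/height auto-completion described above amounts to preserving the input's aspect ratio; a hedged sketch (the function name is illustrative, not Video2X's actual API):

```python
def fill_missing_dimension(orig_w, orig_h, width=None, height=None):
    # Derive whichever output side was left unspecified from the input's
    # aspect ratio; illustrative only, not Video2X's actual code.
    if width is None and height is None:
        raise ValueError("specify at least one of width/height")
    if width is None:
        width = round(height * orig_w / orig_h)
    if height is None:
        height = round(width * orig_h / orig_w)
    return width, height
```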

    Bug Fixes

    • Fixed waifu2x-caffe output quality option
    • Fixed Gifski output resolution issues

    Other Notes

    • waifu2x-caffe has recently upgraded their CUDA Toolkit (11.0) and cuDNN (8.0.3) versions. This makes it incompatible with some of NVIDIA's older models of GPUs
      • To run the newest waifu2x-caffe (version 1.2.0.4 at the time of writing this note), your GPU must have a Compute Capability >= 3.5
      • Check this link for a list of NVIDIA's GPUs and their Compute Capabilities

    Checksums:

    • video2x-4.7.0-win32-light.zip
      • SHA-256: 6D5A5BF9C3CCC7CFE3FED369D80BBD24C690336BFB3D43A8853E8D30485E68FE
    • video2x-4.7.0-win32-full.zip
      • SHA-256: 18104BE7FD5E355EA77F4DB10D2AD8A9DB60201FB736CF3E0B6E45043620B43C
    Source code(tar.gz)
    Source code(zip)
    video2x-4.7.0-win32-full.zip(1005.81 MB)
    video2x-4.7.0-win32-light.zip(131.20 MB)
  • 4.6.1(Sep 1, 2020)

  • 4.6.0(Jun 8, 2020)

    In this release:

    New Features

    • Anime4KCPP 2.0 support (new CNNMode)
    • Customizable output file extensions
    • Added disable logging check box in GUI
    • Various code optimizations and other improvements

    Bug Fixes

    • Fixed GUI file-deletion-related bugs
    • Fixed more python-magic issues, backing off to mimetypes when python-magic fails
    • Fixed config loading issues in exe (frozen) mode
    • Fixed duplicate file name race condition issues

    Docker and Linux

    • Added Ubuntu setup script
    • Dockerfile now uses Ubuntu setup script
    • Ubuntu setup script and Dockerfile now download waifu2x/srmd/realsr-ncnn-vulkan releases instead of compiling them
    • Fixed lots of other issues existed in the previous versions

    image

    Checksums:

    • video2x-4.6.0-win32-light.zip
      • SHA-256: 8D11B511FC46ACAB353343B5A67464178401D55D4A741B20C62AF199C9121147
    • video2x-4.6.0-win32-full.zip
      • SHA-256: E8D6E809219FC308A84ACF3D685AAD95AE9A5E57D212AB0E893ADE31C6FC4AD6
    Source code(tar.gz)
    Source code(zip)
    video2x-4.6.0-win32-full.zip(494.55 MB)
    video2x-4.6.0-win32-light.zip(73.21 MB)
  • 4.5.0(Jun 5, 2020)

  • 4.4.1(May 30, 2020)

    In this release:

    • Updated the setup script for new waifu2x/srmd/realsr-ncnn-vulkan release structure
    • Updated waifu2x/srmd/realsr-ncnn-vulkan packages
    • More comments in the config file

    Checksums:

    • video2x-4.4.1-win32-light.zip
      • SHA-256: A680385B1CA958A206B22D8AA947A3E5B33EA7FE15AD3C755ED4140AE361D8CA
    • video2x-4.4.1-win32-full.zip
      • SHA-256: 7ECAA400582169DCF81A69309A9A5B9BDB17D8411669711DE9F0278EF504A57B
    Source code(tar.gz)
    Source code(zip)
    video2x-4.4.1-win32-full.zip(499.11 MB)
    video2x-4.4.1-win32-light.zip(73.24 MB)
  • 4.4.0(May 29, 2020)

  • 4.3.0(May 23, 2020)

    Changes in this release:

    • Using mimetypes as a backup method for MIME type detection when python-magic fails
    • Added GUI shortcut keys
    • Added H264/H265 tune option
    • Requiring CLI input/output path to be specified when help is not specified
    • Some other minor tweaks and fixes

    image

    Checksums:

    • video2x-4.3.0-win32-light.zip
      • SHA-256: 4E130B7C50507FC4FAC8CDBF8EDE8BFE78AAFA420513CEE734D1C2B2EDBB0180
    • video2x-4.3.0-win32-full.zip
      • SHA-256: 58F7F3822A1F25C3A7A22570CA25010891700D25DDD1CAB3FA196F4D276F40A9
    Source code(tar.gz)
    Source code(zip)
    video2x-4.3.0-win32-full.zip(447.34 MB)
    video2x-4.3.0-win32-light.zip(73.26 MB)
  • 4.2.0(May 18, 2020)

    Changes in this release:

    • Added FFmpeg frame interpolation option (minterpolate)
    • Added stopping confirmation dialog
    • Changed default output pixel format to yuv420p for better compatibility
    • GUI component minor tweaks

    image

    Checksums:

    • video2x-4.2.0-win32-light.zip
      • SHA-256: A49043384C4C138C981A2C09F8EB174A6A15099DB42C355F4CF49CDB8BB3B0EB
    • video2x-4.2.0-win32-full.zip
      • SHA-256: AC5D19C7AE0069E8BF7CBA85C042E8F5C4E080BA997DCF98C24C497634115E87
    Source code(tar.gz)
    Source code(zip)
    video2x-4.2.0-win32-full.zip(447.34 MB)
    video2x-4.2.0-win32-light.zip(73.26 MB)
  • 4.1.0(May 16, 2020)

    Changes in this release:

    • Added FFmpeg settings in GUI
    • Added Gifski settings in GUI
    • Added FFprobe tool in GUI
    • Display upscaler version number in about page
    • Fixed setup script pyunpack PyInstaller bugs (or maybe not bugs)
    • Various bug fixes and enhancements. Check commit history for details

    image

    Checksums:

    • video2x-4.1.0-win32-light.zip
      • SHA-256: 67913323B456207F6B262ACD1CA764A07A8B7C58DACFFEC3E2E28224F442B82A
    • video2x-4.1.0-win32-full.zip
      • SHA-256: 73598F869C19F68185CDFC2743EE591BD2042E6F3CF107B17FDD44CAC6FBFE57
    Source code(tar.gz)
    Source code(zip)
    video2x-4.1.0-win32-full.zip(443.35 MB)
    video2x-4.1.0-win32-light.zip(73.26 MB)
  • 4.0.0(May 11, 2020)

    New features in 4.0.0 release as compared to the previous stable release:

    • Added internationalization support for CLI
      • Added language zh_CN (简体中文)
      • Language will change automatically according to system locale settings
    • Added support for Anime4KCPP in replacement for Anime4K (Java)
    • Driver-specific settings can now be specified in the command line by specifying them after a --
    • All driver-specific settings are parsed by the corresponding driver
    • Modularized driver wrappers in Video2X
    • Completely redesigned GUI
    • Added stop button (or actually making it do something)
    • Added time elapsed, time remaining and processing speed under progress bar
    • Added environment variable expansion for paths (e.g., %LOCALAPPDATA%\video2x)
    • Redesigned exception handling
    • Added soft interruption: setting self.stop_signal = True in the Upscaler object stops execution
    • Added comments for all drivers in the config file
    • Added folder processing functionalities for GUI
    • Added drag and drop support
    • Upgraded input line edit to table view
    • Redesigned UI progress display
    • Other tweaks and minor bug fixes
    • Cleaned up some clutters in the code
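The environment variable expansion mentioned above corresponds to Python's os.path.expandvars, which handles %VAR% on Windows and $VAR on POSIX; for example:

```python
import os

# Demonstration with a made-up variable; on Windows, a path like
# %LOCALAPPDATA%\video2x would expand the same way.
os.environ["VIDEO2X_DEMO"] = "/home/demo"
expanded = os.path.expandvars("$VIDEO2X_DEMO/video2x")
# expanded == "/home/demo/video2x"
```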

    New in this release compared to beta versions:

    • Frame preview
    • Output intermediate file if stream migration fails instead of losing all progress
    • More GUI options for waifu2x-converter-cpp
    • Better about and error dialogs
    • Redesigned driver argument parsing procedure that's more accurate
    • SRMD-NCNN-Vulkan and Waifu2X-NCNN-Vulkan native multi-threading support
    • Bug fixes for preview releases

    image

    Checksums:

    • video2x-4.0.0-win32-light.zip
      • SHA-256: 55D68C19A986CCC28E7D49888A0541914F6F00E203EE46D519CC7C28F6A0A3BC
    • video2x-4.0.0-win32-full.zip
      • SHA-256: 977AC9201A7C4BAA3A18E84AF3622E07A70A441C3821F8D7F361D1680F482E7B
    Source code(tar.gz)
    Source code(zip)
    video2x-4.0.0-win32-full.zip(447.07 MB)
    video2x-4.0.0-win32-light.zip(70.93 MB)
  • 4.0.0-beta3(May 9, 2020)

    New in this release:

    • Added drag and drop support
    • Upgraded input line edit to table view
    • Redesigned UI progress display
    • Other tweaks and minor bug fixes

    image

    Checksums:

    • video2x-4.0.0_beta3-win32-light.zip
      • SHA-256: 2D724B97CFB8DA4373347B6BE0E81A5346FC628F6F24E764952D1BC117800A5C
    • video2x-4.0.0_beta3-win32-full.zip
      • SHA-256: 20DEBE7442AE4BFC90D3711EB9D5342D03C8393B5B2668BE6F28B4F1617D91E7
    Source code(tar.gz)
    Source code(zip)
    video2x-4.0.0_beta3-win32-full.zip(446.90 MB)
    video2x-4.0.0_beta3-win32-light.zip(70.78 MB)
  • 4.0.0-beta2(May 7, 2020)

    Lots of improvements and bug fixes compared to the previous version.

    • Added stop button (or actually making it do something)
    • Added time elapsed, time remaining and processing speed under progress bar
    • Added environment variable expansion for paths (e.g., %LOCALAPPDATA%\video2x)
    • Redesigned exception handling
    • Added soft interruption: setting self.stop_signal = True on the Upscaler object stops execution gracefully
    • Added comments for all drivers in the config file
    • Added folder processing functionalities for GUI
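
    The soft-interruption item above can be sketched as a cooperative stop flag checked inside the processing loop. This is a minimal illustration only; the Upscaler class, its attributes, and the frame loop here are hypothetical stand-ins, not Video2X's actual implementation:

    ```python
    import threading
    import time

    class Upscaler:
        """Hypothetical sketch of a worker that honors a soft stop signal."""

        def __init__(self):
            self.stop_signal = False
            self.frames_done = 0

        def run(self):
            # Process "frames" until finished, or until another thread
            # sets stop_signal, at which point we exit cleanly and keep
            # whatever progress was made so far.
            for _ in range(1000):
                if self.stop_signal:
                    break
                self.frames_done += 1
                time.sleep(0.001)

    upscaler = Upscaler()
    worker = threading.Thread(target=upscaler.run)
    worker.start()
    time.sleep(0.05)
    upscaler.stop_signal = True  # request a graceful stop
    worker.join()
    print(upscaler.frames_done < 1000)  # stopped before all frames were processed
    ```

    Because the flag is only checked between iterations, the worker always finishes the current frame before stopping, which is what makes the interruption "soft".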


    Checksums:

    • video2x-4.0.0_beta2-win32-light.zip
      • SHA-256: 0641CB40ADB2B0D13B6CEE8D289D44B3C190B5E73F98524203A454C8AFC14D2C
    • video2x-4.0.0_beta2-win32-full.zip
      • SHA-256: 3AA61A29DD2945C3F8B22CE0F1E4DEE668D2A2D7975BA9691F27E97E010FF180
    Source code(tar.gz)
    Source code(zip)
    video2x-4.0.0_beta2-win32-full.zip(440.74 MB)
    video2x-4.0.0_beta2-win32-light.zip(70.76 MB)
  • 4.0.0-beta1(May 7, 2020)

    This pre-release contains an experimental version of Video2X CLI 4.0.0 and GUI 2.0.0. Both the CLI and GUI are completely redesigned. This version is still in beta testing and is not yet stable.


    Checksums:

    • video2x-4.0.0_beta1-win32-light.zip
      • SHA-256: 5391085E2D14A234A80E3953F0C3AE531FDC8917309BFC38BD8DBA42C342B088
    • video2x-4.0.0_beta1-win32-full.zip
      • SHA-256: C6A54C257DDAB4AB3766F01D5B35BCC29745DF779F033364D4D70A6BBCBC4DD4
    Source code(tar.gz)
    Source code(zip)
    video2x-4.0.0_beta1-win32-full.zip(446.86 MB)
    video2x-4.0.0_beta1-win32-light.zip(70.75 MB)
  • 3.0.0(Feb 20, 2020)

    The 3.0.0 release contains the newest version of the code now that Linux compatibility has been added. For Windows users, this version resolves bugs such as temp directory cleanup issues and some logic errors that could trigger crashes.

    Since the PE files are unsigned, it is recommended that you verify the checksums before executing anything.

    Checksums:

    • video2x-3.0.0-win32-light.zip
      • SHA-256: E64CE16C657E8AD5CDCCE01360D669BF86A929DF52B90AAE743545BCD0BC7674
    • video2x-3.0.0-win32-full.zip
      • SHA-256: 8F17D3F8EC59C2CF9F70F89C12810E2AA741DBDC05ADCA1C85DD80C6C01ED3B8
    Source code(tar.gz)
    Source code(zip)
    video2x-3.0.0-win32-full.zip(354.90 MB)
    video2x-3.0.0-win32-light.zip(42.20 MB)
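
    Verifying a download against the published SHA-256 can be done with Python's standard hashlib; the file name and expected digest below are taken from the 3.0.0 release notes above (on Windows, `certutil -hashfile <file> SHA256` prints the same digest):

    ```python
    import hashlib
    from pathlib import Path

    def sha256_of(path: str) -> str:
        """Return the uppercase hex SHA-256 digest of a file, read in 1 MiB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest().upper()

    expected = "E64CE16C657E8AD5CDCCE01360D669BF86A929DF52B90AAE743545BCD0BC7674"
    archive = Path("video2x-3.0.0-win32-light.zip")
    if archive.exists():
        print("OK" if sha256_of(str(archive)) == expected else "MISMATCH")
    ```

    Only run the archive's contents if the computed digest matches the one published in the release notes.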
  • 2.10.0(Sep 28, 2019)

  • 2.9.0(Aug 3, 2019)

  • 2.7.1(May 18, 2019)

    This release addresses some bugs from the previous version, the most important being that FFmpeg arguments were not passed in the correct order.

    Checksums:

    • video2x-2.7.1-win32-full.zip
      • SHA224: f771bf66d98e5869fade15dc3d8ebcfc68f6cc7f6160f97c47123058
      • MD5: 7efe61175ffce26ca4b81d99f77e4b15
    • video2x-2.7.1-win32-light.zip
      • SHA224: 8974fa2f400c690fd7088501a82c8580e7a69957219024a532cc9327
      • MD5: a7175c31eb1805c58523bc3eb610d2e8
    Source code(tar.gz)
    Source code(zip)
    video2x-2.7.1-win32-full.zip(308.78 MB)
    video2x-2.7.1-win32-light.zip(22.40 MB)
  • 2.7.0(Mar 31, 2019)

    This is the first release of Video2X in exe format. Files are packaged using PyInstaller 3.4 and Python 3.7.2. If you have any suggestions about the release format, please open a new issue to leave a comment.

    The full package contains FFmpeg, waifu2x-caffe, and waifu2x-converter-cpp, and the configuration file is already set up for this environment. The light package contains only the basic video2x.exe and video2x_setup.exe. To set up dependencies, run video2x_setup.exe.

    Checksums:

    • video2x-2.7.0-win32-full.zip
      • SHA224: ccc9f55d27ddac482d6823b5737b21d1576b47149851b48a1a69f57e
      • MD5: 20620c557af208eb23583795b187d78e
    • video2x-2.7.0-win32-light.zip
      • SHA224: 5663932385db8734acdc2ebf6d6fb5b7e57c002b16d77b9c77458a3e
      • MD5: 91760658f717becfdf9e4759a533931f
    Source code(tar.gz)
    Source code(zip)
    video2x-2.7.0-win32-full.zip(308.79 MB)
    video2x-2.7.0-win32-light.zip(22.39 MB)
Owner: K4YT3X