PyTorch implementation of "VRT: A Video Restoration Transformer"

Overview

VRT: A Video Restoration Transformer

Jingyun Liang, Jiezhang Cao, Yuchen Fan, Kai Zhang, Rakesh Ranjan, Yawei Li, Radu Timofte, Luc Van Gool

Computer Vision Lab, ETH Zurich & Meta Inc.


arxiv | supplementary | pretrained models | visual results


This repository is the official PyTorch implementation of "VRT: A Video Restoration Transformer" (arxiv, supp, pretrained models, visual results). VRT achieves state-of-the-art performance (up to 2.16 dB improvement) in

  • video SR (REDS, Vimeo90K, Vid4 and UDM10)
  • video deblurring (GoPro, DVD and REDS)
  • video denoising (DAVIS and Set8)

🚀 🚀 🚀 News:

  • Jan. 26, 2022: See our previous works on related topics:
      - transformer-based image restoration: SwinIR: Image Restoration Using Swin Transformer, ICCVW 2021
      - real-world image SR: Designing a Practical Degradation Model for Deep Blind Image Super-Resolution, ICCV 2021
      - normalizing flow-based image SR and image rescaling: Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling, ICCV 2021
      - blind image SR: Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution, ICCV 2021
      - blind image SR: Flow-based Kernel Prior with Application to Blind Super-Resolution, CVPR 2021

Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames. Different from single-image restoration, video restoration generally requires utilizing temporal information from multiple adjacent but usually misaligned video frames. Existing deep methods generally tackle this with a sliding-window strategy or a recurrent architecture, which are either restricted to frame-by-frame restoration or lack long-range modelling ability. In this paper, we propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities. More specifically, VRT is composed of multiple scales, each of which consists of two kinds of modules: temporal mutual self attention (TMSA) and parallel warping. TMSA divides the video into small clips, on which mutual attention is applied for joint motion estimation, feature alignment and feature fusion, while self-attention is used for feature extraction. To enable cross-clip interactions, the video sequence is shifted for every other layer. In addition, parallel warping is used to further fuse information from neighboring frames by parallel feature warping. Experimental results on three tasks, including video super-resolution, video deblurring and video denoising, demonstrate that VRT outperforms the state-of-the-art methods by large margins (up to 2.16 dB) on nine benchmark datasets.
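
To make the clip partitioning and shifting concrete, the toy sketch below splits a video into 2-frame clips and shifts the sequence by one frame every other layer. It only illustrates the idea; the actual VRT applies mutual/self attention to spatio-temporal feature windows, so treat the tensor shapes and clip size here as assumptions made for the example.

# Toy illustration of temporal clip partitioning and shifting (not the actual VRT code).
import torch

def partition_clips(x, clip_size=2, shift=False):
    # x: (B, T, C, H, W) video tensor -> (B, T // clip_size, clip_size, C, H, W)
    if shift:
        # Shift the sequence by half a clip so that neighbouring clips overlap
        # across layers, enabling cross-clip interaction.
        x = torch.roll(x, shifts=-(clip_size // 2), dims=1)
    b, t, c, h, w = x.shape
    assert t % clip_size == 0, "pad the sequence so that T is divisible by clip_size"
    return x.view(b, t // clip_size, clip_size, c, h, w)

frames = torch.randn(1, 8, 3, 64, 64)          # an 8-frame toy video
clips = partition_clips(frames)                # even layers: clips (0,1)(2,3)(4,5)(6,7)
shifted = partition_clips(frames, shift=True)  # odd layers:  clips (1,2)(3,4)(5,6)(7,0)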

Contents

  1. Requirements
  2. Quick Testing
  3. Training
  4. Results
  5. Citation
  6. License and Acknowledgement

Requirements

  • Python 3.8, PyTorch >= 1.9.1
  • Requirements: see requirements.txt
  • Platforms: Ubuntu 18.04, cuda-11.1

Quick Testing

The following commands will download the pretrained models and test datasets automatically (except the Vimeo-90K testing set). If you run out of memory, reduce --tile at the expense of slightly decreased performance.

# download code
git clone https://github.com/JingyunLiang/VRT
cd VRT
pip install -r requirements.txt

# 001, video sr trained on REDS (6 frames), tested on REDS4
python main_test_vrt.py --task 001_VRT_videosr_bi_REDS_6frames --folder_lq testsets/REDS4/sharp_bicubic --folder_gt testsets/REDS4/GT --tile 40 128 128 --tile_overlap 2 20 20

# 002, video sr trained on REDS (16 frames), tested on REDS4
python main_test_vrt.py --task 002_VRT_videosr_bi_REDS_16frames --folder_lq testsets/REDS4/sharp_bicubic --folder_gt testsets/REDS4/GT --tile 40 128 128 --tile_overlap 2 20 20

# 003, video sr trained on Vimeo (bicubic), tested on Vid4 and Vimeo
python main_test_vrt.py --task 003_VRT_videosr_bi_Vimeo_7frames --folder_lq testsets/Vid4/BIx4 --folder_gt testsets/Vid4/GT --tile 32 128 128 --tile_overlap 2 20 20
python main_test_vrt.py --task 003_VRT_videosr_bi_Vimeo_7frames --folder_lq testsets/vimeo90k/vimeo_septuplet_matlabLRx4/sequences --folder_gt testsets/vimeo90k/vimeo_septuplet/sequences --tile 8 0 0 --tile_overlap 0 20 20

# 004, video sr trained on Vimeo (blur-downsampling), tested on Vid4, UDM10 and Vimeo
python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/Vid4/BDx4 --folder_gt testsets/Vid4/GT --tile 32 128 128 --tile_overlap 2 20 20
python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/UDM10/BDx4 --folder_gt testsets/UDM10/GT --tile 32 128 128 --tile_overlap 2 20 20
python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/vimeo90k/vimeo_septuplet_BDLRx4/sequences --folder_gt testsets/vimeo90k/vimeo_septuplet/sequences --tile 8 0 0 --tile_overlap 0 20 20

# 005, video deblurring trained and tested on DVD
python main_test_vrt.py --task 005_VRT_videodeblurring_DVD --folder_lq testsets/DVD10/test_GT_blurred --folder_gt testsets/DVD10/test_GT --tile 12 256 256 --tile_overlap 2 20 20

# 006, video deblurring trained and tested on GoPro
python main_test_vrt.py --task 006_VRT_videodeblurring_GoPro --folder_lq testsets/GoPro11/test_GT_blurred --folder_gt testsets/GoPro11/test_GT --tile 18 192 192 --tile_overlap 2 20 20

# 007, video deblurring trained on REDS, tested on REDS4
python main_test_vrt.py --task 007_VRT_videodeblurring_REDS --folder_lq testsets/REDS4/blur --folder_gt testsets/REDS4/GT --tile 12 256 256 --tile_overlap 2 20 20

# 008, video denoising trained on DAVIS (noise level 0-50) and tested on Set8 and DAVIS
python main_test_vrt.py --task 008_VRT_videodenoising_DAVIS --sigma 10 --folder_lq testsets/Set8 --folder_gt testsets/Set8 --tile 12 256 256 --tile_overlap 2 20 20
python main_test_vrt.py --task 008_VRT_videodenoising_DAVIS --sigma 10  --folder_lq testsets/DAVIS-test --folder_gt testsets/DAVIS-test --tile 12 256 256 --tile_overlap 2 20 20
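
The sketch below illustrates the general idea behind --tile and --tile_overlap: restore small overlapping spatio-temporal tiles and average the overlapping predictions so that peak memory stays bounded. It assumes tile sizes are given as (frames, height, width) and is a simplified stand-in for, not a copy of, the logic in main_test_vrt.py.

# Simplified sketch of overlap-tiled testing (illustration only, not main_test_vrt.py).
import torch

def _starts(size, tile, overlap):
    # Start indices so that consecutive tiles overlap and the last tile touches the end.
    stride = max(tile - overlap, 1)
    starts = list(range(0, max(size - tile, 0) + 1, stride))
    if starts[-1] != max(size - tile, 0):
        starts.append(max(size - tile, 0))
    return starts

def tiled_restore(model, lq, tile=(12, 128, 128), overlap=(2, 20, 20), scale=4):
    # lq: (B, T, C, H, W) low-quality clip; returns the stitched (B, T, C, H*scale, W*scale) output.
    b, t, c, h, w = lq.shape
    out = lq.new_zeros(b, t, c, h * scale, w * scale)
    weight = torch.zeros_like(out)
    for ti in _starts(t, min(tile[0], t), overlap[0]):
        for hi in _starts(h, min(tile[1], h), overlap[1]):
            for wi in _starts(w, min(tile[2], w), overlap[2]):
                patch = lq[:, ti:ti + tile[0], :, hi:hi + tile[1], wi:wi + tile[2]]
                pred = model(patch)  # restored tile, upscaled by `scale` in space
                out[:, ti:ti + tile[0], :, hi * scale:(hi + tile[1]) * scale,
                    wi * scale:(wi + tile[2]) * scale] += pred
                weight[:, ti:ti + tile[0], :, hi * scale:(hi + tile[1]) * scale,
                       wi * scale:(wi + tile[2]) * scale] += 1
    return out / weight  # every output pixel is covered by at least one tile

Smaller tiles lower peak memory but also shrink the temporal and spatial context each tile sees, which is why reducing --tile costs a little accuracy.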

All visual results of VRT can be downloaded here.

Training

The training and testing sets are as follows (see the supplementary for a detailed introduction of all datasets). For better I/O speed, use create_lmdb.py to convert .png datasets to .lmdb datasets.

Note: You do NOT need to prepare the datasets if you just want to test the model. main_test_vrt.py will download the testing set automatically.

  • video SR (setting 1, BI)
    Training set: REDS sharp & sharp_bicubic (266 videos, 266000 frames: train + val except REDS4). *Use regroup_reds_dataset.py to regroup and rename the REDS val set.
    Testing set: REDS4 (4 videos, 400 frames: clips 000, 011, 015 and 020 of REDS).
    Pretrained model and visual results: here.

  • video SR (settings 2 & 3, BI & BD)
    Training set: Vimeo90K (64612 seven-frame videos, as in sep_trainlist.txt). *Use generate_LR_Vimeo90K.m and generate_LR_Vimeo90K_BD.m to generate the LR frames for bicubic and blur-downsampling VSR, respectively.
    Testing set: Vimeo90K-T (the remaining 7824 seven-frame videos) + Vid4 (4 videos) + UDM10 (10 videos). *Use prepare_UDM10.py to regroup and rename the UDM10 dataset.
    Pretrained model and visual results: here.

  • video deblurring (setting 1, motion blur)
    Training set: DVD (61 videos, 5708 frames). *Use prepare_DVD.py to regroup and rename the dataset.
    Testing set: DVD (10 videos, 1000 frames). *Use evaluate_video_deblurring.m for the final evaluation.
    Pretrained model and visual results: here.

  • video deblurring (setting 2, motion blur)
    Training set: GoPro (22 videos, 2103 frames). *Use prepare_GoPro_as_video.py to regroup and rename the dataset.
    Testing set: GoPro (11 videos, 1111 frames). *Use evaluate_video_deblurring.m for the final evaluation.
    Pretrained model and visual results: here.

  • video deblurring (setting 3, motion blur)
    Training set: REDS sharp & blur (266 videos, 266000 frames: train & val except REDS4). *Use regroup_reds_dataset.py to regroup and rename the REDS val set. Note that it shares the same HQ frames as the VSR setting.
    Testing set: REDS4 (4 videos, 400 frames: clips 000, 011, 015 and 020 of REDS).
    Pretrained model and visual results: here.

  • video denoising (Gaussian noise)
    Training set: DAVIS-2017 (90 videos, 6208 frames). *Use all files in DAVIS/JPEGImages/480p.
    Testing set: DAVIS-2017-test (30 videos) + Set8 (8 videos: tractor, touchdown, park_joy and sunflower selected from DERF, plus hypersmooth, motorbike, rafting and snowboard from GOPRO_540P).
    Pretrained model and visual results: here.
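
Regarding the LMDB conversion mentioned above, create_lmdb.py in this repository is the reference script. Purely as an illustration of why LMDB helps (one memory-mapped database file instead of many small .png reads), a minimal packer using the lmdb package might look like the sketch below; the directory paths in the usage line are hypothetical.

# Minimal sketch: pack a folder of .png frames into a single .lmdb database.
# Illustration only; use create_lmdb.py for actual dataset preparation.
import glob
import os
import lmdb

def pack_pngs_to_lmdb(png_root, lmdb_path):
    paths = sorted(glob.glob(os.path.join(png_root, '**', '*.png'), recursive=True))
    env = lmdb.open(lmdb_path, map_size=1 << 40)  # reserve up to ~1 TB of address space
    with env.begin(write=True) as txn:
        for p in paths:
            key = os.path.relpath(p, png_root).encode('utf-8')  # e.g. '000/00000000.png'
            with open(p, 'rb') as f:
                txn.put(key, f.read())  # store the encoded PNG bytes under the relative path
    env.close()

# Hypothetical usage:
# pack_pngs_to_lmdb('trainsets/REDS/train_sharp', 'trainsets/REDS/train_sharp.lmdb')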

The training code will be put in KAIR.

Results

We achieved state-of-the-art performance on video SR, video deblurring and video denoising. Detailed results can be found in the paper.

Video Super-Resolution

Video Deblurring

Video Denoising

Citation

@article{liang2022vrt,
    title={VRT: A Video Restoration Transformer},
    author={Liang, Jingyun and Cao, Jiezhang and Fan, Yuchen and Zhang, Kai and Ranjan, Rakesh and Li, Yawei and Timofte, Radu and Van Gool, Luc},
    journal={arXiv preprint arXiv:2201.12288},
    year={2022}
}

License and Acknowledgement

This project is released under the CC-BY-NC license. We refer to code from KAIR, BasicSR, Video Swin Transformer and mmediting; thanks for their awesome work. The majority of VRT is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: KAIR is licensed under the MIT License, while BasicSR, Video Swin Transformer and mmediting are licensed under the Apache 2.0 license.

Comments
  • Problem of "use_checkpoint_attn"

    I am trying to reimplement the training part, but I encounter the following problem when I set the parameter 'use_checkpoint_attn' in self.residual_group2 to True. Could you provide a solution?

    opened by wlj961012 8
  • RuntimeError expected input... to have 28 channels, but got 27 channels instead

    I am getting this error on my own test data (with task 008_VRT_videodenoising_DAVIS)

    RuntimeError: Given groups=1, weight of size [96, 28, 1, 3, 3], expected input[1, 27, 32, 128, 128] to have 28 channels, but got 27 channels instead

    Full stack:

    File "C:\Dev\VRT\models\network_vrt.py", line 1395, in forward
        x = self.conv_first(x.transpose(1, 2))
    File "C:\tools\miniconda3\envs\pt\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
    File "C:\tools\miniconda3\envs\pt\lib\site-packages\torch\nn\modules\conv.py", line 590, in forward
        return self._conv_forward(input, self.weight, self.bias)
    File "C:\tools\miniconda3\envs\pt\lib\site-packages\torch\nn\modules\conv.py", line 585, in _conv_forward
        return F.conv3d(
    RuntimeError: Given groups=1, weight of size [96, 28, 1, 3, 3], expected input[1, 27, 32, 128, 128] to have 28 channels, but got 27 channels instead

    opened by ialhashim 6
  • I tested video SR with SwinIR and VRT, but SwinIR performs better. Is it normal?

    VRT testing command

    CUDA_VISIBLE_DEVICES=9 \
    python main_test_vrt.py --task 002_VRT_videosr_bi_REDS_16frames \
                            --folder_lq /home/liao/cjj/dataset/test/LR \
                            --folder_gt /home/liao/cjj/dataset/test/GT \
                            --tile 10 128 128 \
                            --tile_overlap 2 20 20
    

    SwinIR model: 001_classicalSR_DIV2K_s48w8_SwinIR-M_x4

    Video for testing: https://cowtransfer.com/s/1739646a86874e

    Result

    SwinIR        1         2         3         4         Average
    PSNR      26.9603   31.9831   33.0922   33.2781   31.32843
    SSIM       0.7353    0.9022    0.8842    0.9233    0.86125

    VRT           1         2         3         4         Average
    PSNR      26.7961   31.7153   30.7655   34.3461   30.90575
    SSIM       0.7272    0.8931    0.8724    0.9385    0.8578

    opened by cjj490168650 5
  • How to run inference on larger frames e.g. 360p?

    Hello! Thanks for the great work with VRT. I wanted to know if you have any tips and recommendations on how we can run your evaluation code against our own higher-resolution frames. It seems from my tests that anything above 180p just runs OOM on a K80 (12G) and a T4 (16G), regardless of the tile size that I use, for all models (REDS, Vimeo, etc.). Do you have any advice? Thanks!

    opened by machinelearnear 5
  • Same error, solution didn't work: RuntimeError expected input... to have 28 channels, but got 27 channels instead

    I ran into the same error as #14, and verified that self.nonblind_denoising was set to True here, but still receive the error:

    line 585, in _conv_forward
        return F.conv3d(
    RuntimeError: Given groups=1, weight of size [96, 28, 1, 3, 3], expected input[1, 27, 40, 128, 128] to have 28 channels, but got 27 channels instead
    

    This is using the dataset VRT/testsets/REDS4/sharp_bicubic via the call python main_test_vrt.py --task 008_VRT_videodenoising_DAVIS --folder_lq testsets/REDS4/sharp_bicubic --tile 40 128 128 --tile_overlap 2 20 20. I ultimately want to run this on my own folder of PNGs from a video.

    opened by dkoslicki 4
  • Request for training setting recommendations for ×4 VSR

    If I only have 2 or 4 RTX 3090s and want to train a model for ×4 VSR, how can I set the training parameters effectively? That is, no OOM, no large performance drop, and a moderate training time.

    For example, there are two parameters for using checkpointing to save CUDA memory, use_checkpoint_attn and use_checkpoint_ffn; which one has the greater influence on training time and memory consumption?

    Looking forward to your reply, thank you.

    opened by LuoXin-s 3
  • Testing fails in network_vrt.py @ get_flow_4frames, flows_forward[0].shape[1]

    Hi, I've been trying to use this code in combination with github.com/cszn/KAIR to train a VRT model on my own data with a custom dataloader I wrote. Unfortunately, I'm running into an error in the testing phase in get_flow_4frames, because the shape of flows_forward[0] is: torch.Size([1, 0, 2, 64, 64])

    The X input into forward is: torch.Size([1, 1, 3, 64, 64])
    The X input into get_flows is: torch.Size([1, 1, 3, 64, 64])
    The X input into get_flow_2frames is: torch.Size([1, 1, 3, 64, 64])
    The forward_flows[0] is as previously specified: torch.Size([1, 0, 2, 64, 64])

    def get_flow_4frames(self, flows_forward, flows_backward):
            '''Get flow between t and t+2 from (t,t+1) and (t+1,t+2).'''
    
            # backward
            d = flows_forward[0].shape[1]
            flows_backward2 = []
            for flows in flows_backward:
                flow_list = []
                for i in range(d - 1, 0, -1):
                    flow_n1 = flows[:, i - 1, :, :, :]  # flow from i+1 to i
                    flow_n2 = flows[:, i, :, :, :]  # flow from i+2 to i+1
                    flow_list.insert(0, flow_n1 + flow_warp(flow_n2, flow_n1.permute(0, 2, 3, 1)))  # flow from i+2 to i
                if len(flow_list) != 0:
                    flows_backward2.append(torch.stack(flow_list, 1))
    

    The training is working without any issues.

    Is this the anticipated behavior within the code or is there something regarding the test settings that I'm missing?

    opened by amrosado 2
  • Memory consumption while training

    Hi, congrats on this cool work! I'm trying to train your model, but I only have 2 A100 GPUs, so memory is limited. I wonder how much memory is needed to train models like "003_VRT_videosr_bi_Vimeo_7frames.pth" and "006_VRT_videodeblurring_GoPro.pth"?

    opened by xg416 2
  • A few questions about the paper 😸

    According to the paper, "the runtime is 2.2s per frame on 1280×720 blurred videos". Which GPU did you use to measure the runtime?
    I also have a question about model size: did you try smaller model variants (as is popular in modern transformers, e.g., something like VRT-S or VRT-L with different parameter counts), or does the architecture not support custom sizes well?

    And of course, congrats on the cool paper 📦

    opened by machineko 2
  • Torch.distributed.elastic.multiprocessing.api.SignalException: Process XXXX got signal :1

    Hello, thank you for the code. I encounter an error when I train with 005_train_vrt_videodeblurring_dvd.json:

    Fix keys: ['spynet', 'deform'] for the first 20000 iters.
    Fix keys: ['spynet', 'deform'] for the first 20000 iters.
    22-09-01 02:31:11.512 : <epoch: 0, iter: 400, lr:4.000e-04> G_loss: 7.544e-02
    22-09-01 02:48:36.264 : <epoch: 0, iter: 600, lr:4.000e-04> G_loss: 1.637e-02
    22-09-01 03:06:01.631 : <epoch: 0, iter: 800, lr:4.000e-04> G_loss: 7.941e-02
    WARNING:torch.distributed.elastic.agent.server.api:Received 1 death signal, shutting down workers
    WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2704351 closing signal SIGHUP
    WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2704352 closing signal SIGHUP
    Traceback (most recent call last):
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/run.py", line 755, in run
        )(*cmd_args)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 236, in launch_agent
        result = agent.run()
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
        result = f(*args, **kwargs)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
        result = self._invoke_run(role)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 850, in _invoke_run
        time.sleep(monitor_interval)
      File "/home/ET/huiyuxiang/miniconda3/envs/deblur/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 60, in _terminate_process_handler
        raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
    torch.distributed.elastic.multiprocessing.api.SignalException: Process 2704341 got signal: 1

    I am using python=3.7.13 and pytorch=1.12.1.

    opened by timerobin 1
  • VRT 2x upscale

    I was wondering if the authors have any suggestions for finetuning the VRT model to do a 2x upscale instead of a 4x upscale. I removed some layers from the Upsample module to support 2x upscale, however the forward/backward pass is consuming too much VRAM. Which layers do you suggest to remove from the model to reduce the model complexity and also achieve good results for a 2x upscale?

    Currently, I have tried 2x upscale training with 1 GPU, batch size =1, low quality frames crop size = 64x64, and high quality frames crop size = 128x128. The maximum VRAM usage in the forward pass/backward pass is 23GB.

    opened by ArfaSaif 1
  • How can we denoise a video sequence with only lr input and no GT?

    Great research, thank you! I want to test the denoising effect on my own dataset, which has only noisy inputs and no corresponding GT. How can I do this? Thanks!

    opened by haikunzhang95 0
  • FileNotFoundError

    After 4 successful cells:

    FileNotFoundError                         Traceback (most recent call last)
    <ipython-input-5-b2c3ee9af109> in <module>
          4   os.remove(zip_filename)
          5 os.system(f"zip -r -j {zip_filename} results/*")
    ----> 6 files.download(zip_filename)
    
    /usr/local/lib/python3.7/dist-packages/google/colab/files.py in download(filename)
        207   if not _os.path.exists(filename):
        208     msg = 'Cannot find file: {}'.format(filename)
    --> 209     raise FileNotFoundError(msg)  # pylint: disable=undefined-variable
        210 
        211   comm_manager = _IPython.get_ipython().kernel.comm_manager
    
    FileNotFoundError: Cannot find file: VRT_result.zip
    
    opened by Drjacky 0
  • Out-of-memory problem during training

    Hi, I am currently trying to train the VRT model, but I have run into an out-of-memory problem.

    I am training on an NVIDIA RTX 2080 Ti, which has 11GB of device memory, so I reduced the batch size from 8 to 4 and enabled torch.utils.checkpoint (use_checkpoint_attn: true, use_checkpoint_ffn: true, gt_size=256).

    But I still run out of memory (cannot allocate memory).

    Is it possible to train on an RTX 2080 Ti?

    opened by mapsosa84 0
  • Log Files from Training

    Hello,

    Thank you for your awesome code!

    I am hoping you might open-source the log files you have from training. Maybe the training and validation loss as a function of epoch (and/or batch) with an estimate of the runtime?

    opened by gauenk 0
  • Inference Taking Forever

    I am trying to deblur a 150-frame video on a machine with two NVIDIA RTX A5000 GPUs, using the GoPro deblur model, and I reduced the tile value. But this operation is taking forever. How can I solve this? Is an NVIDIA RTX A5000 enough for inference?

    opened by pentanol2 1