BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Overview

BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Ported from https://github.com/open-mmlab/mmediting

Dependencies

Installing mmcv-full on Windows is a bit complicated because it requires Visual Studio and other tools to compile the CUDA ops. I have therefore uploaded a prebuilt wheel compiled with CUDA 11.1 for Windows users; you can install it by executing the following command.

pip install https://github.com/HolyWu/vs-basicvsrpp/releases/download/v1.0.0/mmcv_full-1.3.12-cp39-cp39-win_amd64.whl
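
On platforms other than Windows, or for a different CUDA/PyTorch combination, mmcv-full can usually be installed from OpenMMLab's prebuilt wheel index instead; the index URL below (CUDA 11.3 / torch 1.11, the same one referenced in the issues further down) is only an example and should match your installed versions.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html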

Installation

pip install --upgrade vsbasicvsrpp
python -m vsbasicvsrpp

Usage

from vsbasicvsrpp import BasicVSRPP

ret = BasicVSRPP(clip)

See __init__.py for the description of the parameters.
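
For orientation, here is a minimal end-to-end sketch of a typical script. The source path and matrix values are placeholders, and the parameters mentioned in the comments (model, interval, tile_x/tile_y/tile_pad, fp16, device_index) are only hinted at here; see __init__.py for their exact meaning and defaults.

import vapoursynth as vs
from vsbasicvsrpp import BasicVSRPP

core = vs.core

# load the source (placeholder path) and convert it to 32-bit float RGB, which BasicVSRPP expects
clip = core.ffms2.Source(source='input.mp4')
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s='709')

# run BasicVSR++ with default settings; options such as model, interval,
# tile_x/tile_y/tile_pad, fp16 and device_index can be passed to trade
# quality against VRAM usage (see __init__.py and the comments below)
clip = BasicVSRPP(clip)

# convert back to a YUV format for encoding
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s='709')
clip.set_output()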

Comments
  • Question about the tiling,...

    I got a GeForce GTX 1070 Ti with 8 GB of VRAM. (I know it's not new and not really that suited for this, but that's what I got. :)) If I crop my source to a 480x480 chunk and run BasicVSR++ on it, ~1.4-3.5 GB of VRAM are used. Without cropping my source, I thought that even with some padding, using "BasicVSRPP(clip=clip, model=5, tile_x=480, tile_y=480)" would allow me to filter HD and UHD clips, which it does not. Question is: why? Shouldn't this work with a 480x480 tiling?

    What are the minimum tile width and height sizes? (I assumed it would be 64*4=256 + padding, but when using 320x320 I get: Python exception: Analyse: failed to retrieve first frame from super clip. Error message: The height and width of low-res inputs must be at least 64, but got 84 and 44. Using 392x392 I get: The height and width of low-res inputs must be at least 64, but got 102 and 26.) -> Just from testing, I don't get what the tiling does at all. :)

    opened by Selur 5
  • Question regarding some Arguments

    Hi, my apologies, I'm not sure if I forgot to submit my last typed issue or if it was removed. If it was removed, feel free to close this without a comment. I typed it this morning and, honestly, can't remember if I submitted it or closed the browser window by accident.

    I have a few questions about some of the arguments one can set, as those are a bit different from the mmediting interface, if I have seen that correctly.

    Interval: This specifies the number of images per batch, is that correct? In addition to reducing the VRAM footprint, I assume it influences what the network can see during the upscale process? The smaller the batch, the smaller the window it can include in the temporal calculations?

    Tiling: This splits the image into tiles for processing instead of processing the whole image, is that correct? What is the purpose or best-case scenario someone would use this for?

    FP16: I run the network with FP16 at the moment, as FP32 usually blows up my 11 GB of VRAM if I don't reduce the interval. As I did not have enough time to finish all my tests yesterday: have you noticed any degradation in image quality when FP16 is used? If I want to run it with FP32, I need to reduce the interval, but if my assumption is correct, this narrows down the time window the network can calculate across. So I'm torn between precision and interval size for VRAM usage. Do you have any experience with this and what a good tradeoff might look like?

    opened by Memnarch 4
  • mmcv-full on Windows and Python 3.10

    Since VapourSynth R58 needs either Python 3.8 (Win7 compatible) or Python 3.10 (which I'm using), the current mmcv_full-1.3.16-cp39-cp39-win_amd64.whl is not a supported wheel, since it's meant for Python 3.9. Would be nice if you could create an mmcv-full wheel for Python 3.10 on Windows. Thanks!

    opened by Selur 2
  • Sizes of tensors must match except in dimension 3. Got 213 and 214 (The offending index is 0)

    Python 3.9.7, torch 1.9.1, mmcv 1.3.13/1.4.4, VapourSynth R57, VS FatPack

    import vapoursynth as vs
    core = vs.core

    from vsbasicvsrpp import BasicVSRPP

    video = core.ffms2.Source(source=r'D:\winpython\VapourSynth64Portable\in.mp4')
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="709")
    video = BasicVSRPP(clip=video, interval=10, model=5, fp16=True, tile_pad=0)
    video = core.resize.Bicubic(video, format=vs.YUV420P8, matrix_s="709")
    video.set_output()

    opened by oblessnoob 2
  • meshgrid() got an unexpected keyword argument 'indexing'

    Using v1.4.0 and clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True). Full script:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v using D2VSource
    clip = core.d2v.Source(input="E:/Temp/m2v_154f3f0f52f994b09117b9c8650e17d2_853323747.d2v")
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.97
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    # DEBUG: vsTIVTC changed scanorder to: progressive
    # cropping the video to 704x480
    clip = core.std.CropRel(clip=clip, left=6, right=10, top=0, bottom=0)
    # adjusting color space from YUV420P8 to RGBS for vsBasicVSRPPFilter
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # Quality enhancement using BasicVSR++
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 23.976fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    

    I get:

    Error on frame 0 request:
    meshgrid() got an unexpected keyword argument 'indexing'
    
    opened by Selur 2
  • ImportError: DLL load failed while importing _ext: The specified procedure could not be found.

    Using Python 3.9.7, I'm having the following issue when I try to finish installing vsbasicvsrpp with python -m vsbasicvsrpp.

    I had installed it just fine weeks ago, but I attempted to update PyTorch last night with pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html, and that seemed to break everything. I managed to fix most of it, but vsbasicvsrpp remains unfixed due to the following.

    (c) Microsoft Corporation. All rights reserved.
    
    C:\WINDOWS\system32>pip install --upgrade vsbasicvsrpp
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Collecting vsbasicvsrpp
      Using cached vsbasicvsrpp-1.3.0-py3-none-any.whl (21 kB)
    Requirement already satisfied: torchvision in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (0.11.1+cu113)
    Requirement already satisfied: mmcv-full>=1.3.13 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.3.14)
    Requirement already satisfied: torch>=1.9.0 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.10.0+cu113)
    Requirement already satisfied: numpy in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.21.3)
    Requirement already satisfied: Pillow in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (8.3.1)
    Requirement already satisfied: packaging in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.0)
    Requirement already satisfied: addict in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.31.0)
    Requirement already satisfied: regex in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2021.8.28)
    Requirement already satisfied: pyyaml in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (5.4.1)
    Requirement already satisfied: typing-extensions in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (3.10.0.0)
    Requirement already satisfied: pyparsing>=2.0.2 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.7)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Installing collected packages: vsbasicvsrpp
    Successfully installed vsbasicvsrpp-1.3.0
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    
    C:\WINDOWS\system32>python -m vsbasicvsrpp
    Traceback (most recent call last):
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 188, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 147, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 111, in _get_module_details
        __import__(pkg_name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .ball_query import ball_query
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\ball_query.py", line 7, in <module>
        ext_module = ext_loader.load_ext('_ext', ['ball_query_forward'])
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed while importing _ext: The specified procedure could not be found.
    opened by AIisCool 2
  • Adjust strength?

    First off I wish to say thank you very much for all the work you do HolyWu!

    I love how much noise vs-basicvsrpp removes from the video, but sometimes it's a little too much (i.e. it removes something that isn't noise at all). Is there any way to adjust its strength, similar to DPIR?

    Thank you.

    opened by AIisCool 2
  • Model 5 seems to downscale by 4

    Hi, while I got good results, they did not seem to be quite as sharp as I expected. When I tried to process a 136x264 video, I suddenly got this error:

    Traceback (most recent call last):
      File "E:\Git\RivenTools\Reproduce.py", line 26, in <module>
        video.output(f, y4m=True)
      File "src\cython\vapoursynth.pyx", line 1790, in vapoursynth.VideoNode.output
      File "src\cython\vapoursynth.pyx", line 1655, in frames
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 445, in result
        return self.__get_result()
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 390, in __get_result
        raise self._exception
    vapoursynth.Error: The height and width of low-res inputs must be at least 64, but got 66 and 34.
    

    But my video is larger than that. Looking into it, executing model 5 downscales the video, or at least its internal data seems to be downscaled: while the returned clip reports the correct size, the frames appear to be smaller. A quarter of the size, to be precise.

    This seems to be the code causing it: https://github.com/HolyWu/vs-basicvsrpp/blob/da066461f66c6e7deedf354630899b815393836b/vsbasicvsrpp/basicvsr_pp.py#L291

    And this is the script to reproduce it (it's a trimmed-down version, which is why models 5 and 1 are executed directly after one another; my pipeline has some steps in between): Reproduce.zip

    If you need that specific video from my script, I can share that with you but would prefer to do that outside of this report, as it's a game asset.

    opened by Memnarch 2
  • got a warning,...

    Running vs-basicvsrpp I get:

    I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\functional.py:3657: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
      warnings.warn(
    

    Should I do something, or should I ignore this?

    opened by Selur 2
  • calling "python -m vsbasicvsrpp" fails

    Using a portable Vapoursynth R58 and calling python -m pip install --upgrade vsbasicvsrpp gives me

    Requirement already satisfied: vsbasicvsrpp in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.4.1)
    Requirement already satisfied: torchvision in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (0.12.0)
    Requirement already satisfied: mmcv-full>=1.3.13 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.5.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.22.3)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.11.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2022.3.15)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (6.0)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.3)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (9.1.0)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.32.0)
    Requirement already satisfied: typing-extensions in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (4.2.0)
    Requirement already satisfied: requests in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torchvision->vsbasicvsrpp) (2.27.1)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (3.0.8)
    Requirement already satisfied: idna<4,>=2.5 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (3.3)
    Requirement already satisfied: certifi>=2017.4.17 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2021.10.8)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (1.26.9)
    Requirement already satisfied: charset-normalizer~=2.0.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2.0.12)
    

    which looks fine to me. Problem is, calling python -m vsbasicvsrpp gives me:

    Traceback (most recent call last):
      File "runpy.py", line 187, in _run_module_as_main
      File "runpy.py", line 146, in _get_module_details
      File "runpy.py", line 110, in _get_module_details
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .active_rotated_filter import active_rotated_filter
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\active_rotated_filter.py", line 8, in <module>
        ext_module = ext_loader.load_ext(
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "importlib\__init__.py", line 126, in import_module
    

    Any idea what I'm doing wrong / where the problem is?

    Calling python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html gives me:

    Looking in links: https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
    Requirement already satisfied: mmcv-full in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.5.0)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (6.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (1.22.3)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2.4.0)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (9.1.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (0.32.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2022.3.15)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (21.3)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full) (3.0.8)
    

    which seems fine to me, too.

    Cu Selur

    opened by Selur 1
  • NTIRE 2021 Video Super-Resolution

    opened by AIisCool 1
  • Multi-gpu support?

    I tried to utilize 4 GPUs for inference with the following code, but it didn't work. Only one of the GPUs was doing the job at a time and the others were idling. Are there any suggested ways to do multi-GPU inference?

    import vapoursynth as vs
    import os
    from vsbasicvsrpp import BasicVSRPP
    core = vs.core
    
    folder = r'C:\Users\test\Desktop\vs-58'
    file = r'test.m4v'
    
    src = os.path.join(folder, file)
    
    src = core.ffms2.Source(src)
    
    src = core.fmtc.resample (clip=src, css="444")
    src = core.fmtc.matrix (clip=src, mat="709", col_fam=vs.RGB)
    src = core.fmtc.bitdepth (clip=src, bits=32)
    
    interval = 180
    n = 4
    
    add = (interval*n) - len(src) % (interval*n)
    if add>0:
    	src = src + core.std.BlankClip(src, length=add)
    
    c1 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*0, interval)])
    c1 = BasicVSRPP(c1, model=5, interval=interval, device_index=0, fp16=True)
    
    c2 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*1, interval*2)])
    c2 = BasicVSRPP(c2, model=5, interval=interval, device_index=1, fp16=True)
    
    c3 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*2, interval*3)])
    c3 = BasicVSRPP(c3, model=5, interval=interval, device_index=2, fp16=True)
    
    c4 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*3, interval*4)])
    c4 = BasicVSRPP(c4, model=5, interval=interval, device_index=3, fp16=True)
    
    
    c = core.std.Interleave(clips=[c1, c2, c3, c4])
    
    a = [i for i in range(interval*n) if i % n ==0] + [i for i in range(interval*n) if i % n ==1] + [i for i in range(interval*n) if i % n ==2] + [i for i in range(interval*n) if i % n ==3]
    
    c = core.std.SelectEvery(clip=c, cycle=interval*n, offsets=a)
    
    c = core.fmtc.matrix (clip=c, mat="709", col_fam=vs.YUV)
    c = core.fmtc.resample (clip=c, css="420")
    c = core.fmtc.bitdepth(clip =c, bits=16)   
    
    if add>0:
    	c = c[:-add]
    
    c.set_output()
    
    opened by Bouby308 0
  • deepfillv2 support

    Thanks for your effort :smile: At present, there isn't a good inpainting method in VS. I notice mmediting also supports deepfillv2; how about porting it to VS? (It performs well for images, though there may be temporal flicker for video.)

    opened by soldivelot 0
Releases (v1.4.1)

Owner

Holy Wu