BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Overview

Ported from https://github.com/open-mmlab/mmediting

Dependencies

Installing mmcv-full on Windows is somewhat involved because it requires Visual Studio and other tools to compile the CUDA ops. I have therefore uploaded a prebuilt wheel compiled with CUDA 11.1 for Windows users, which you can install by executing the following command.

pip install https://github.com/HolyWu/vs-basicvsrpp/releases/download/v1.0.0/mmcv_full-1.3.12-cp39-cp39-win_amd64.whl

Installation

pip install --upgrade vsbasicvsrpp
python -m vsbasicvsrpp

Usage

from vsbasicvsrpp import BasicVSRPP

ret = BasicVSRPP(clip)

See __init__.py for the description of the parameters.
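
A slightly fuller example script (a sketch for illustration only; the interval and fp16 values are taken from the issue reports below, and the clip is converted to RGBS as those reports do — see __init__.py for the authoritative parameter list):

import vapoursynth as vs
from vsbasicvsrpp import BasicVSRPP

core = vs.core

# Load a source and convert it to RGBS, which the filter works on.
clip = core.ffms2.Source("input.mp4")
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

# Run BasicVSR++; the interval and fp16 values here are illustrative.
clip = BasicVSRPP(clip, interval=30, fp16=True)

# Convert back to YUV for encoding and set the output.
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()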

Comments
  • Question about the tiling,...

    I have a GeForce GTX 1070 Ti with 8 GB of VRAM. (I know it's not new and not really suited for this, but it's what I have. :)) If I crop my source to a 480x480 chunk and run BasicVSR++ on it, roughly 1.4-3.5 GB of VRAM are used. Without cropping the source, I assumed that even with some padding, using "BasicVSRPP(clip=clip, model=5, tile_x=480, tile_y=480)" would let me filter HD and UHD clips, but it does not. Question is: why? Shouldn't this work with 480x480 tiling?

    What are the minimum tile width and height sizes? (I assumed it would be 64*4=256 plus padding, but when using 320x320 I get: Python exception: Analyse: failed to retrieve first frame from super clip. Error message: The height and width of low-res inputs must be at least 64, but got 84 and 44. Using 392x392 I get: The height and width of low-res inputs must be at least 64, but got 102 and 26.) Just from testing, I don't understand what the tiling does at all. :)
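
    The following is a generic sketch of how overlapped tiling usually works (an assumption for illustration, not the plugin's exact code): the frame is cut into tile_x-by-tile_y windows, each extended by tile_pad on every side, and the tiles at the right and bottom edges can end up much smaller than the requested size, which is one way a 480x480 request can still trip the minimum-size check on a large frame.

    # Hypothetical helper for illustration only; tile_x/tile_y/tile_pad mirror the
    # plugin's parameter names, but the actual splitting logic may differ.
    def tile_regions(width, height, tile_x, tile_y, tile_pad):
        regions = []
        for top in range(0, height, tile_y):
            for left in range(0, width, tile_x):
                l = max(left - tile_pad, 0)
                t = max(top - tile_pad, 0)
                r = min(left + tile_x + tile_pad, width)
                b = min(top + tile_y + tile_pad, height)
                regions.append((l, t, r - l, b - t))  # (x, y, width, height) of each padded tile
        return regions

    # A 1920x1080 frame with 480x480 tiles and 16 px padding: the bottom row of tiles
    # is only 120 + 16 px high, which shows how small edge tiles can appear.
    print(tile_regions(1920, 1080, 480, 480, 16))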

    opened by Selur 5
  • Question regarding some Arguments

    Hi, my apologies, I'm not sure whether I forgot to submit my last typed issue or whether it was removed. If it was removed, feel free to close this without a comment. I typed it this morning and honestly can't remember whether I submitted it or closed the browser window by accident.

    I have a few questions about some of the arguments one can set, as they differ a bit from the mmediting interface, if I have seen that correctly.

    Interval: this specifies the number of images per batch, is that correct? Besides reducing the VRAM footprint, I assume it also influences what the network can see during the upscale process? The smaller the batch, the smaller the window it can include in the temporal calculations?

    Tiling: this splits the image into tiles for processing instead of handling the whole image at once, is that correct? What is the purpose, or the best-case scenario in which someone would use this?

    FP16: I run the network with FP16 at the moment, as FP32 usually blows past my 11 GB of VRAM unless I reduce the interval. Since I didn't have enough time to finish all my tests yesterday: have you noticed any degradation in image quality when FP16 is used? If I want to run with FP32 I need to reduce the interval, but if my assumption is correct, that narrows the time window the network can calculate across, so I'm weighing precision against interval size for VRAM usage. Do you have any experience with this and what a good tradeoff might look like?
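
    As a rough illustration of the interval tradeoff described above (an assumption about the general idea, not the plugin's internals): processing in chunks of interval frames means temporal propagation only sees frames inside the same chunk, so a smaller interval trades VRAM for a shorter temporal window.

    # Hypothetical sketch: split a clip's frame indices into chunks of `interval` frames;
    # information would only propagate within each chunk, not across chunk boundaries.
    def chunk_frame_indices(num_frames, interval):
        return [list(range(start, min(start + interval, num_frames)))
                for start in range(0, num_frames, interval)]

    print(chunk_frame_indices(100, 30))  # [0..29], [30..59], [60..89], [90..99]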

    opened by Memnarch 4
  • mmcv-full on Windows and Python 3.10

    Since VapourSynth R58 needs either Python 3.8 (Win7 compatible) or Python 3.10 (which I'm using), the current mmcv_full-1.3.16-cp39-cp39-win_amd64.whl is not a supported wheel, as it is built for Python 3.9. It would be nice if you could create an mmcv-full wheel for Python 3.10 on Windows. Thanks!

    opened by Selur 2
  • Sizes of tensors must match except in dimension 3. Got 213 and 214 (The offending index is 0)

    Environment: Python 3.9.7, torch 1.9.1, mmcv 1.3.13/1.4.4, VapourSynth R57 fatpack.

    import vapoursynth as vs
    core = vs.core

    from vsbasicvsrpp import BasicVSRPP

    video = core.ffms2.Source(source=r'D:\winpython\VapourSynth64Portable\in.mp4')
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="709")
    video = BasicVSRPP(clip=video, interval=10, model=5, fp16=True, tile_pad=0)
    video = core.resize.Bicubic(video, format=vs.YUV420P8, matrix_s="709")
    video.set_output()

    opened by oblessnoob 2
  • meshgrid() got an unexpected keyword argument 'indexing'

    Using v1.4.0 and clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True). Full script:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v using D2VSource
    clip = core.d2v.Source(input="E:/Temp/m2v_154f3f0f52f994b09117b9c8650e17d2_853323747.d2v")
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.97
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    # DEBUG: vsTIVTC changed scanorder to: progressive
    # cropping the video to 704x480
    clip = core.std.CropRel(clip=clip, left=6, right=10, top=0, bottom=0)
    # adjusting color space from YUV420P8 to RGBS for vsBasicVSRPPFilter
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # Quality enhancement using BasicVSR++
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 23.976fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    

    I get:

    Error on frame 0 request:
    meshgrid() got an unexpected keyword argument 'indexing'
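
    For context: torch.meshgrid only accepts the indexing keyword from PyTorch 1.10 onwards, so this error usually indicates an older torch installation. A generic compatibility shim of the kind that typically papers over this (an illustration only, not the plugin's code) would be:

    import torch

    def meshgrid_compat(*tensors, indexing="ij"):
        # Pass `indexing` where supported (PyTorch >= 1.10); older releases always
        # behave like indexing="ij", so fall back to the positional-only call there.
        try:
            return torch.meshgrid(*tensors, indexing=indexing)
        except TypeError:
            return torch.meshgrid(*tensors)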
    
    opened by Selur 2
  • ImportError: DLL load failed while importing _ext: The specified procedure could not be found.

    Using Python 3.9.7, I'm having the following issue when I try to finish installing vsbasicvsrpp with python -m vsbasicvsrpp.

    I had installed it just fine weeks ago, but last night I attempted to update PyTorch with pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html and that seemed to break everything. I managed to fix most of it, but vsbasicvsrpp remains broken with the following error.

    (c) Microsoft Corporation. All rights reserved.
    
    C:\WINDOWS\system32>pip install --upgrade vsbasicvsrpp
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Collecting vsbasicvsrpp
      Using cached vsbasicvsrpp-1.3.0-py3-none-any.whl (21 kB)
    Requirement already satisfied: torchvision in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (0.11.1+cu113)
    Requirement already satisfied: mmcv-full>=1.3.13 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.3.14)
    Requirement already satisfied: torch>=1.9.0 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.10.0+cu113)
    Requirement already satisfied: numpy in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.21.3)
    Requirement already satisfied: Pillow in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (8.3.1)
    Requirement already satisfied: packaging in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.0)
    Requirement already satisfied: addict in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.31.0)
    Requirement already satisfied: regex in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2021.8.28)
    Requirement already satisfied: pyyaml in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (5.4.1)
    Requirement already satisfied: typing-extensions in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (3.10.0.0)
    Requirement already satisfied: pyparsing>=2.0.2 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.7)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Installing collected packages: vsbasicvsrpp
    Successfully installed vsbasicvsrpp-1.3.0
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    
    C:\WINDOWS\system32>python -m vsbasicvsrpp
    Traceback (most recent call last):
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 188, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 147, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 111, in _get_module_details
        __import__(pkg_name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .ball_query import ball_query
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\ball_query.py", line 7, in <module>
        ext_module = ext_loader.load_ext('_ext', ['ball_query_forward'])
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed while importing _ext: The specified procedure could not be found.
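
    This kind of _ext import failure usually means the compiled mmcv-full ops no longer match the installed torch/CUDA build after the PyTorch upgrade, so a reasonable first step is to compare both versions before reinstalling mmcv-full (a generic diagnostic sketch, not from the repository):

    # Print the installed torch and mmcv versions; importing the top-level mmcv package
    # works even when mmcv.ops (the compiled _ext module) fails to load.
    import torch
    import mmcv

    print("torch:", torch.__version__, "CUDA:", torch.version.cuda)
    print("mmcv:", mmcv.__version__)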
    opened by AIisCool 2
  • Adjust strength?

    First off I wish to say thank you very much for all the work you do HolyWu!

    I love how much noise vs-basicvsrpp removes from the video, but sometimes it's a little too much (i.e. it removes something that isn't noise at all). Is there any way to adjust its strength, similar to DPIR?

    Thank you.
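
    One possible workaround (an assumption for illustration, not a plugin feature) is to blend the filtered clip back with the source to dial the effect down. The sketch below assumes a model that keeps the input resolution, as reported in the "Model 5" issue further down, so both clips share the same dimensions and format:

    import vapoursynth as vs
    from vsbasicvsrpp import BasicVSRPP

    core = vs.core

    clip = core.ffms2.Source("input.mp4")
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
    filtered = BasicVSRPP(clip, model=5)  # assumed to return the input resolution
    # weight=0 keeps the source untouched, weight=1 keeps the full filtered result.
    softened = core.std.Merge(clipa=clip, clipb=filtered, weight=0.7)
    softened.set_output()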

    opened by AIisCool 2
  • Model 5 seems to downscale by 4

    Hi, while I got good results, they did not seem to be quite as sharp as I expected. When I tried to process a 136x264 video, I suddenly got this error:

    Traceback (most recent call last):
      File "E:\Git\RivenTools\Reproduce.py", line 26, in <module>
        video.output(f, y4m=True)
      File "src\cython\vapoursynth.pyx", line 1790, in vapoursynth.VideoNode.output
      File "src\cython\vapoursynth.pyx", line 1655, in frames
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 445, in result
        return self.__get_result()
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 390, in __get_result
        raise self._exception
    vapoursynth.Error: The height and width of low-res inputs must be at least 64, but got 66 and 34.
    

    But my video is larger than that. Looking into it, model 5 downscales the video, or at least its internal data seems to be downscaled. While the returned clip reports the correct size, the frames themselves appear to be processed at a lower resolution, a quarter of the size to be precise.

    This seems to be the code causing it: https://github.com/HolyWu/vs-basicvsrpp/blob/da066461f66c6e7deedf354630899b815393836b/vsbasicvsrpp/basicvsr_pp.py#L291

    And this is the script to reproduce it (it's a trimmed-down version, which is why model 5 and 1 are executed directly after one another; my pipeline has some steps in between): Reproduce.zip

    If you need that specific video from my script, I can share that with you but would prefer to do that outside of this report, as it's a game asset.
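
    A possible workaround for small inputs (a hedged sketch, not something the repository provides): pad the clip up to the 256-pixel minimum implied by the 64-pixel low-res constraint, run the model, then crop the padding back off. This assumes model 5 returns the input resolution, as described above:

    import vapoursynth as vs
    from vsbasicvsrpp import BasicVSRPP

    core = vs.core

    def basicvsrpp_padded(clip, model=5, min_size=256):
        # Pad right/bottom so both dimensions reach the assumed 64*4 = 256 minimum.
        pad_r = max(0, min_size - clip.width)
        pad_b = max(0, min_size - clip.height)
        padded = core.std.AddBorders(clip, right=pad_r, bottom=pad_b)
        out = BasicVSRPP(padded, model=model)
        # Crop the padding off again (valid only if the model keeps the input resolution).
        return core.std.Crop(out, right=pad_r, bottom=pad_b)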

    opened by Memnarch 2
  • got a warning,...

    Running vs-basicvsrpp I get:

    I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\functional.py:3657: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
      warnings.warn(
    

    Should I do something about this, or can I safely ignore it?
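
    If the message is just noise, it can be silenced from the calling script with Python's standard warnings module (a generic approach, not something the plugin requires):

    import warnings

    # Suppress only this specific UserWarning emitted by torch.nn.functional.interpolate.
    warnings.filterwarnings("ignore", message="The default behavior for interpolate/upsample")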

    opened by Selur 2
  • calling "python -m vsbasicvsrpp" fails

    Using a portable VapourSynth R58, calling python -m pip install --upgrade vsbasicvsrpp gives me:

    Requirement already satisfied: vsbasicvsrpp in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.4.1)
    Requirement already satisfied: torchvision in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (0.12.0)
    Requirement already satisfied: mmcv-full>=1.3.13 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.5.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.22.3)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.11.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2022.3.15)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (6.0)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.3)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (9.1.0)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.32.0)
    Requirement already satisfied: typing-extensions in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (4.2.0)
    Requirement already satisfied: requests in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torchvision->vsbasicvsrpp) (2.27.1)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (3.0.8)
    Requirement already satisfied: idna<4,>=2.5 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (3.3)
    Requirement already satisfied: certifi>=2017.4.17 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2021.10.8)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (1.26.9)
    Requirement already satisfied: charset-normalizer~=2.0.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2.0.12)
    

    which looks fine to me. Problem is, calling python -m vsbasicvsrpp gives me:

    Traceback (most recent call last):
      File "runpy.py", line 187, in _run_module_as_main
      File "runpy.py", line 146, in _get_module_details
      File "runpy.py", line 110, in _get_module_details
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .active_rotated_filter import active_rotated_filter
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\active_rotated_filter.py", line 8, in <module>
        ext_module = ext_loader.load_ext(
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "importlib\__init__.py", line 126, in import_module
    

    Any idea what I'm doing wrong or where the problem is?

    Calling python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html gives me:

    Looking in links: https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
    Requirement already satisfied: mmcv-full in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.5.0)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (6.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (1.22.3)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2.4.0)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (9.1.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (0.32.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2022.3.15)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (21.3)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full) (3.0.8)
    

    which also seems fine to me.

    Cu Selur

    opened by Selur 1
  • NTIRE 2021 Video Super-Resolution

    opened by AIisCool 1
  • Multi-gpu support?

    I tried to utilize 4 GPUs for inference with the following code, but it didn't work: only one of the GPUs was doing the job at a time while the others were idling. Are there any suggested ways to do multi-GPU inference?

    import vapoursynth as vs
    import os
    from vsbasicvsrpp import BasicVSRPP
    core = vs.core
    
    folder = r'C:\Users\test\Desktop\vs-58'
    file = r'test.m4v'
    
    src = os.path.join(folder, file)
    
    src = core.ffms2.Source(src)
    
    src = core.fmtc.resample (clip=src, css="444")
    src = core.fmtc.matrix (clip=src, mat="709", col_fam=vs.RGB)
    src = core.fmtc.bitdepth (clip=src, bits=32)
    
    interval = 180
    n = 4
    
    add = (interval*n) - len(src) % (interval*n)
    if add>0:
    	src = src + core.std.BlankClip(src, length=add)
    
    c1 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*0, interval)])
    c1 = BasicVSRPP(c1, model=5, interval=interval, device_index=0, fp16=True)
    
    c2 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*1, interval*2)])
    c2 = BasicVSRPP(c2, model=5, interval=interval, device_index=1, fp16=True)
    
    c3 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*2, interval*3)])
    c3 = BasicVSRPP(c3, model=5, interval=interval, device_index=2, fp16=True)
    
    c4 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*3, interval*4)])
    c4 = BasicVSRPP(c4, model=5, interval=interval, device_index=3, fp16=True)
    
    
    c = core.std.Interleave(clips=[c1, c2, c3, c4])
    
    a = [i for i in range(interval*n) if i % n ==0] + [i for i in range(interval*n) if i % n ==1] + [i for i in range(interval*n) if i % n ==2] + [i for i in range(interval*n) if i % n ==3]
    
    c = core.std.SelectEvery(clip=c, cycle=interval*n, offsets=a)
    
    c = core.fmtc.matrix (clip=c, mat="709", col_fam=vs.YUV)
    c = core.fmtc.resample (clip=c, css="420")
    c = core.fmtc.bitdepth(clip =c, bits=16)   
    
    if add>0:
    	c = c[:-add]
    
    c.set_output()
    
    opened by Bouby308 0
  • deepfillv2 support

    Thanks for your effort :smile: At present there isn't a good inpainting method in VapourSynth. I notice mmediting also supports DeepFillv2; how about porting it to VapourSynth as well? (It performs well on images, though there may be temporal flicker on video.)

    opened by soldivelot 0
Releases(v1.4.1)
Owner
Holy Wu