BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Overview

BasicVSR++

BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Ported from https://github.com/open-mmlab/mmediting

Dependencies

Installing mmcv-full on Windows is a bit complicated, as it requires Visual Studio and other tools to compile the CUDA ops. So I have uploaded a prebuilt wheel compiled with CUDA 11.1 for Windows users, which you can install with the following command.

pip install https://github.com/HolyWu/vs-basicvsrpp/releases/download/v1.0.0/mmcv_full-1.3.12-cp39-cp39-win_amd64.whl
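
For other platforms or newer Python/CUDA combinations, mmcv-full can instead be installed from the OpenMMLab prebuilt wheel index, as some of the issue reports below do (pick the index matching your CUDA and PyTorch versions; cu113/torch1.11 is just the combination used there):

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html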

Installation

pip install --upgrade vsbasicvsrpp
python -m vsbasicvsrpp

Usage

from vsbasicvsrpp import BasicVSRPP

ret = BasicVSRPP(clip)

See __init__.py for the description of the parameters.
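
A fuller end-to-end sketch, mirroring the scripts in the issue reports below (the source path and matrix are placeholders; the RGBS conversion before the call and the YUV conversion afterwards follow what those scripts do):

import vapoursynth as vs
core = vs.core

from vsbasicvsrpp import BasicVSRPP

# any source filter works; ffms2 is used here as in the issue reports
clip = core.ffms2.Source(source='input.mp4')
# convert to RGBS before calling the filter, as the scripts below do
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="709")
# see __init__.py for model, interval, tile_x/tile_y, tile_pad, fp16, device_index, ...
clip = BasicVSRPP(clip)
# convert back to YUV for encoding
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()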

Comments
  • Question about the tiling,...

    I have a GeForce GTX 1070 Ti with 8 GB of VRAM. (I know it's not new and not really suited for this, but it's what I've got. :)) If I crop my source to a 480x480 chunk and run BasicVSR++ on it, about 1.4-3.5 GB of VRAM is used. Without cropping, I thought that even with some padding, using "BasicVSRPP(clip=clip, model=5, tile_x=480, tile_y=480)" would allow me to filter HD and UHD clips, but it does not. Question is: why? Shouldn't this work with 480x480 tiling?

    What are the minimum tile width and height sizes? I assumed it would be 64*4=256 plus padding, but when using 320x320 I get: Python exception: Analyse: failed to retrieve first frame from super clip. Error message: The height and width of low-res inputs must be at least 64, but got 84 and 44. Using 392x392 I get: The height and width of low-res inputs must be at least 64, but got 102 and 26. Just from testing, I don't understand what the tiling does at all. :) [A generic tiling sketch follows this item.]

    opened by Selur 5
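
    Editorial note on the tiling questions above: the exact tile arithmetic lives in __init__.py, but as a rough mental model, a fixed tile size partitions the frame into a grid, and the tiles in the last row/column only get whatever remainder is left over, so they can end up far smaller than tile_x/tile_y. A minimal sketch of that idea (a generic assumption, not the plugin's actual code):

    def tile_sizes(width, height, tile_x, tile_y, pad):
        # Illustrative only: every tile gets 2*pad added here, ignoring that edge tiles
        # may receive less padding in a real implementation.
        sizes = []
        for top in range(0, height, tile_y):
            for left in range(0, width, tile_x):
                w = min(tile_x, width - left) + 2 * pad  # remainder tiles at the right edge
                h = min(tile_y, height - top) + 2 * pad  # and bottom edge can be small
                sizes.append((w, h))
        return sizes

    # e.g. a 1280x724 frame with 480x480 tiles leaves a 320x244 remainder tile (before padding);
    # small remainder tiles are one way the 64-pixel minimum mentioned above can be violated.
    print(tile_sizes(1280, 724, 480, 480, pad=16))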
  • Question regarding some Arguments

    Hi, my apologies, I'm not sure if I forgot to submit my last typed issue or if it was removed. If it was removed, feel free to close this without comment. I typed it this morning and, honestly, can't remember whether I submitted it or closed the browser window by accident.

    I have a few questions about some of the arguments one can set, as those are a bit different from the mmediting interface, if I have seen that correctly.

    Interval: This specifies the number of images per batch, is that correct? In addition to reducing the VRAM footprint, I assume it influences what the network can see during the upscale process? The smaller the batch, the smaller the window it can include in the temporal calculations?

    Tiling: This splits the image into tiles for processing instead of processing the whole image at once, is that correct? What is the purpose, or the best-case scenario in which someone would use this?

    FP16: I run the network with FP16 at the moment, as FP32 usually blows past my 11 GB of VRAM if I don't reduce the interval. As I didn't have enough time to finish all my tests yesterday: have you noticed any degradation in image quality when FP16 is used? If I want to run it with FP32, I need to reduce the interval, but if my assumption is correct, that narrows the time window the network can calculate across. So it's precision vs. interval size for VRAM usage. Do you have any experience with this and what a good tradeoff might look like?

    opened by Memnarch 4
  • mmcv-full on Windows and Python 3.10

    Since VapourSynth RC58 needs either Python 3.8 (Win7 compatible) or Python 3.10 (which I'm using), the current mmcv_full-1.3.16-cp39-cp39-win_amd64.whl is not a supported wheel, since it's meant for Python 3.9. It would be nice if you could build mmcv-full for Python 3.10 on Windows. Thanks!

    opened by Selur 2
  • Sizes of tensors must match except in dimension 3. Got 213 and 214 (The offending index is 0)

    python 3.9.7, torch 1.9.1, mmcv 1.3.13/1.4.4, vs r57, vs fatpack

    import vapoursynth as vs
    core = vs.core
    from vsbasicvsrpp import BasicVSRPP

    video = core.ffms2.Source(source=r'D:\winpython\VapourSynth64Portable\in.mp4')
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="709")
    video = BasicVSRPP(clip=video, interval=10, model=5, fp16=True, tile_pad=0)
    video = core.resize.Bicubic(video, format=vs.YUV420P8, matrix_s="709")
    video.set_output()

    opened by oblessnoob 2
  • meshgrid() got an unexpected keyword argument 'indexing'

    Using v1.4.0 and clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True), full script:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v using D2VSource
    clip = core.d2v.Source(input="E:/Temp/m2v_154f3f0f52f994b09117b9c8650e17d2_853323747.d2v")
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.97
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    # DEBUG: vsTIVTC changed scanorder to: progressive
    # cropping the video to 704x480
    clip = core.std.CropRel(clip=clip, left=6, right=10, top=0, bottom=0)
    # adjusting color space from YUV420P8 to RGBS for vsBasicVSRPPFilter
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # Quality enhancement using BasicVSR++
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 23.976fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    

    I get:

    Error on frame 0 request:
    meshgrid() got an unexpected keyword argument 'indexing'
    
    opened by Selur 2
  • ImportError: DLL load failed while importing _ext: The specified procedure could not be found.

    Using Python 3.9.7, I'm having the following issue when I try to finish installing vsbasicvsrpp with python -m vsbasicvsrpp.

    I had installed it just fine weeks ago, but I attempted to update PyTorch last night with: pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html and that seemed to break everything, so I managed to fix it, but vsbasicvsrpp remains unfixed due to the following.

    (c) Microsoft Corporation. All rights reserved.
    
    C:\WINDOWS\system32>pip install --upgrade vsbasicvsrpp
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Collecting vsbasicvsrpp
      Using cached vsbasicvsrpp-1.3.0-py3-none-any.whl (21 kB)
    Requirement already satisfied: torchvision in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (0.11.1+cu113)
    Requirement already satisfied: mmcv-full>=1.3.13 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.3.14)
    Requirement already satisfied: torch>=1.9.0 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.10.0+cu113)
    Requirement already satisfied: numpy in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.21.3)
    Requirement already satisfied: Pillow in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (8.3.1)
    Requirement already satisfied: packaging in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.0)
    Requirement already satisfied: addict in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.31.0)
    Requirement already satisfied: regex in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2021.8.28)
    Requirement already satisfied: pyyaml in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (5.4.1)
    Requirement already satisfied: typing-extensions in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (3.10.0.0)
    Requirement already satisfied: pyparsing>=2.0.2 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.7)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Installing collected packages: vsbasicvsrpp
    Successfully installed vsbasicvsrpp-1.3.0
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    
    C:\WINDOWS\system32>python -m vsbasicvsrpp
    Traceback (most recent call last):
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 188, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 147, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 111, in _get_module_details
        __import__(pkg_name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .ball_query import ball_query
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\ball_query.py", line 7, in <module>
        ext_module = ext_loader.load_ext('_ext', ['ball_query_forward'])
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed while importing _ext: The specified procedure could not be found.
    opened by AIisCool 2
  • Adjust strength?

    First off I wish to say thank you very much for all the work you do HolyWu!

    I love how much noise vs-basicvsrpp removes from the video, but sometimes it's a little too much (i.e. it removes something that isn't noise at all). Is there any way to adjust its strength, similar to DPIR? [A generic blending sketch follows this item.]

    Thank you.

    opened by AIisCool 2
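
    Editorial sketch for the strength question above: a generic VapourSynth workaround for any filter that is "too strong" is to blend its output with the (resized) source using std.Merge. This is only an assumption about a workable approach, not a feature of vs-basicvsrpp itself:

    import vapoursynth as vs
    core = vs.core
    from vsbasicvsrpp import BasicVSRPP

    src = core.ffms2.Source(source='input.mp4')
    src = core.resize.Bicubic(clip=src, format=vs.RGBS, matrix_in_s="709")
    flt = BasicVSRPP(src)
    # bring the source up to the filtered clip's dimensions (the SR models upscale, so sizes differ)
    ref = core.resize.Bicubic(clip=src, width=flt.width, height=flt.height)
    # weight=0.0 keeps the plain upscaled source, 1.0 keeps the full BasicVSR++ result
    out = core.std.Merge(clipa=ref, clipb=flt, weight=0.7)
    out.set_output()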
  • Model 5 seems to downscale by 4

    Hi, while I got good results, they did not seem quite as sharp as I expected. When I tried to process a 136x264 video, I suddenly got this error:

    Traceback (most recent call last):
      File "E:\Git\RivenTools\Reproduce.py", line 26, in <module>
        video.output(f, y4m=True)
      File "src\cython\vapoursynth.pyx", line 1790, in vapoursynth.VideoNode.output
      File "src\cython\vapoursynth.pyx", line 1655, in frames
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 445, in result
        return self.__get_result()
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 390, in __get_result
        raise self._exception
    vapoursynth.Error: The height and width of low-res inputs must be at least 64, but got 66 and 34.
    

    But my video is larger. Looking into it, model 5 downscales the video, or at least its internal data seems to be downscaled: while the returned clip reports the correct size, the actual frames seem to be smaller, a quarter of the size to be precise.

    This seems to be the code causing it: https://github.com/HolyWu/vs-basicvsrpp/blob/da066461f66c6e7deedf354630899b815393836b/vsbasicvsrpp/basicvsr_pp.py#L291

    And this is the script to reproduce it (it's a trimmed-down version, which is why model 5 and model 1 are executed directly after one another; my pipeline has some steps in between): Reproduce.zip

    If you need that specific video from my script, I can share that with you but would prefer to do that outside of this report, as it's a game asset.

    opened by Memnarch 2
  • got a warning,...

    Running vs-basicvsrpp I get:

    I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\functional.py:3657: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
      warnings.warn(
    

    Should I do something, or can I ignore this?

    opened by Selur 2
  • calling "python -m vsbasicvsrpp" fails

    Using a portable Vapoursynth R58 and calling python -m pip install --upgrade vsbasicvsrpp gives me

    Requirement already satisfied: vsbasicvsrpp in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.4.1)
    Requirement already satisfied: torchvision in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (0.12.0)
    Requirement already satisfied: mmcv-full>=1.3.13 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.5.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.22.3)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.11.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2022.3.15)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (6.0)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.3)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (9.1.0)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.32.0)
    Requirement already satisfied: typing-extensions in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (4.2.0)
    Requirement already satisfied: requests in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torchvision->vsbasicvsrpp) (2.27.1)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (3.0.8)
    Requirement already satisfied: idna<4,>=2.5 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (3.3)
    Requirement already satisfied: certifi>=2017.4.17 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2021.10.8)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (1.26.9)
    Requirement already satisfied: charset-normalizer~=2.0.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2.0.12)
    

    which looks fine to me. Problem is, calling python -m vsbasicvsrpp gives me:

    Traceback (most recent call last):
      File "runpy.py", line 187, in _run_module_as_main
      File "runpy.py", line 146, in _get_module_details
      File "runpy.py", line 110, in _get_module_details
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .active_rotated_filter import active_rotated_filter
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\active_rotated_filter.py", line 8, in <module>
        ext_module = ext_loader.load_ext(
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "importlib\__init__.py", line 126, in import_module
    

    any idea what I'm doing wrong / where the problem is?

    Calling python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html gives me:

    Looking in links: https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
    Requirement already satisfied: mmcv-full in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.5.0)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (6.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (1.22.3)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2.4.0)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (9.1.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (0.32.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2022.3.15)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (21.3)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full) (3.0.8)
    

    which also seems fine to me.

    Cu Selur

    opened by Selur 1
  • NTIRE 2021 Video Super-Resolution

    opened by AIisCool 1
  • Multi-gpu support?

    I tried to utilize 4 GPUs for inference with the following code, but it didn't work: only one of the GPUs was doing the job at a time while the others were idle. Are there any suggested ways to do multi-GPU inference?

    import vapoursynth as vs
    import os
    from vsbasicvsrpp import BasicVSRPP
    core = vs.core
    
    folder = r'C:\Users\test\Desktop\vs-58'
    file = r'test.m4v'
    
    src = os.path.join(folder, file)
    
    src = core.ffms2.Source(src)
    
    src = core.fmtc.resample (clip=src, css="444")
    src = core.fmtc.matrix (clip=src, mat="709", col_fam=vs.RGB)
    src = core.fmtc.bitdepth (clip=src, bits=32)
    
    interval = 180
    n = 4
    
    add = (interval*n) - len(src) % (interval*n)
    if add>0:
    	src = src + core.std.BlankClip(src, length=add)
    
    c1 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*0, interval)])
    c1 = BasicVSRPP(c1, model=5, interval=interval, device_index=0, fp16=True)
    
    c2 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*1, interval*2)])
    c2 = BasicVSRPP(c2, model=5, interval=interval, device_index=1, fp16=True)
    
    c3 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*2, interval*3)])
    c3 = BasicVSRPP(c3, model=5, interval=interval, device_index=2, fp16=True)
    
    c4 = core.std.SelectEvery(clip=src, cycle=interval*n,  offsets=[i for i in range(interval*3, interval*4)])
    c4 = BasicVSRPP(c4, model=5, interval=interval, device_index=3, fp16=True)
    
    
    c = core.std.Interleave(clips=[c1, c2, c3, c4])
    
    a = [i for i in range(interval*n) if i % n ==0] + [i for i in range(interval*n) if i % n ==1] + [i for i in range(interval*n) if i % n ==2] + [i for i in range(interval*n) if i % n ==3]
    
    c = core.std.SelectEvery(clip=c, cycle=interval*n, offsets=a)
    
    c = core.fmtc.matrix (clip=c, mat="709", col_fam=vs.YUV)
    c = core.fmtc.resample (clip=c, css="420")
    c = core.fmtc.bitdepth(clip =c, bits=16)   
    
    if add>0:
    	c = c[:-add]
    
    c.set_output()
    
    opened by Bouby308 0
  • deepfillv2 support

    Thanks for your effort :smile: At present there isn't a good inpainting method in VapourSynth. I notice mmediting also supports deepfillv2; how about porting it to VS? It performs well for images, though there may be temporal flicker for video.

    opened by soldivelot 0
Releases (v1.4.1)
Owner
Holy Wu