Easy and Efficient Object Detector

Overview

EOD (Easy and Efficient Object Detection) is a general object detection model production framework. It aims to provide two key features for object detection:

  • Efficient: we focus on training VERY HIGH ACCURACY single-shot detection models, and model compression (quantization/sparsity) is well addressed.
  • Easy: easy to use, easy to add new features (backbone/head/neck), easy to deploy.
  • Large-Scale Dataset Training Details
  • Equalized Focal Loss for Dense Long-Tailed Object Detection: EFL
  • Improved YOLOX: YOLOX-RET
  • Quantization Aware Training (QAT) interface based on MQBench.

The master branch works with PyTorch 1.8.1. Because of this PyTorch version, 30-series GPUs are not well supported.
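
A quick sanity check of the environment (a sketch; the expected values follow from the note above):

python -c "import torch; print(torch.__version__)"                    # expect 1.8.1
python -c "import torch; print(torch.cuda.get_device_capability(0))"  # (8, 6) means a 30-series GPU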

Install

pip install -r requirements.txt

Get Started

Some example scripts are provided in scripts/.

Export Module

Export the eod root into ROOT and PYTHONPATH:

ROOT=../../
export ROOT=$ROOT
export PYTHONPATH=$ROOT:$PYTHONPATH
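
To verify the export, check that the eod package resolves from the new PYTHONPATH:

python -c "import eod; print(eod.__file__)"  # should print a path under $ROOT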

Train

Step1: edit meta_file and image_dir of image_reader:

dataset:
  type: coco # dataset type
  kwargs:
    source: train
    meta_file: coco/annotations/instances_train2017.json
    image_reader:
      type: fs_opencv
      kwargs:
        image_dir: coco/train2017
        color_mode: BGR
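
The paths above assume the standard COCO 2017 layout relative to the working directory (a sketch; adjust meta_file and image_dir to your own data):

# coco/
#   annotations/instances_train2017.json   <- meta_file
#   train2017/*.jpg                        <- image_dir
ls coco/annotations/instances_train2017.json coco/train2017 | head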

Step2: train

python -m eod train --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 8 --launch pytorch 2>&1 | tee log.train
  • --config: yaml configs in configs/
  • --nm: number of machines
  • --ng: number of GPUs per machine
  • --launch: slurm or pytorch
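
For example, scaling to two machines with eight GPUs each only changes these flags (values are illustrative):

python -m eod train --config configs/det/yolox/yolox_tiny.yaml --nm 2 --ng 8 --launch pytorch 2>&1 | tee log.train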

Step3: fp16. Add the fp16 setting to the runtime config:

runtime:
    fp16: True
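
With the flag merged into the training YAML, no command-line change should be needed; the same launch command from Step2 picks up mixed-precision training:

python -m eod train --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 8 --launch pytorch 2>&1 | tee log.train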

Eval

Step1: edit the config of the evaluation dataset

Step2: test

python -m eod train -e --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 1 --launch pytorch 2>&1 | tee log.test
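
Evaluation reuses the train entry point with -e, so the GPU count may differ from training; for example, on 8 GPUs:

python -m eod train -e --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 8 --launch pytorch 2>&1 | tee log.test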

Demo

Step1: add visualizer config in yaml

inference:
  visualizer:
    type: plt
    kwargs:
      class_names: ['__background__', 'person'] # class names
      thresh: 0.5

Step2: inference

python -m eod inference --config configs/det/yolox/yolox_tiny.yaml --ckpt ckpt_tiny.pth -i imgs -v vis_dir
  • --ckpt: checkpoint used for inference
  • -i: image directory or a single image
  • -v: directory for saving visualization results
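
Since -i accepts either a directory or a single image, a single-file run looks like this (the image name is hypothetical):

python -m eod inference --config configs/det/yolox/yolox_tiny.yaml --ckpt ckpt_tiny.pth -i imgs/demo.jpg -v vis_dir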

Mpirun mode

EOD supports launching tasks in mpirun mode; MPI needs to be installed first.

# download mpich
wget https://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz # other versions: https://www.mpich.org/static/downloads/

tar -zxvf mpich-3.2.1.tar.gz
cd mpich-3.2.1
./configure  --prefix=/usr/local/mpich-3.2.1
make && make install
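
After make install, the freshly built binaries are not on PATH yet; one way to expose them (the prefix matches the ./configure line above):

export PATH=/usr/local/mpich-3.2.1/bin:$PATH
which mpirun  # should resolve to /usr/local/mpich-3.2.1/bin/mpirun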

Launch task

mpirun -np 8 python -m eod train --config configs/det/yolox/yolox_tiny.yaml --launch mpi 2>&1 | tee log.train
  • Add mpirun -np x: x indicates the number of processes
  • mpirun mode is convenient for debugging with pdb
  • --launch: mpi
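
Since each rank is an ordinary process under mpirun, dropping to a single process makes stepping through code with pdb practical (a debugging sketch):

mpirun -np 1 python -m eod train --config configs/det/yolox/yolox_tiny.yaml --launch mpi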

Custom Example

Benchmark

Quick Run

Tutorials

Useful Tools

References

Acknowledgments

Thanks to all past contributors, especially opcoder.

Comments
  • Questions about EFL code implementation

    Hello, can you explain the mechanism of the gradient collection function? Although the gradient-gathering function is defined in the forward function, it does not seem to be called anywhere. Even if self.pos_neg.detach() is used, what is the input parameter of collect_grad()? Does it really take effect?

    opened by xc-chengdu 8
  • How to use quant_runner

    Thank you for the excellent work on MQBench and EOD. I am interested in quantization and tried the config retinanet-r50_1x_quant.yaml, but ran into some errors. Besides, I found there is no quantization documentation in this project. Can you give some suggestions on how to use quant_runner?

    Here are the errors I encountered when using retinanet-r50_1x_quant.yaml:

    error_1

    File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/user/project/EOD/eod/utils/env/launch.py", line 117, in _distributed_worker main_func(args) File "/home/user/project/EOD/eod/commands/train.py", line 121, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/home/user/project/EOD/eod/runner/quant_runner.py", line 14, in init super(QuantRunner, self).init(config, work_dir, training) File "/home/user/project/EOD/eod/runner/base_runner.py", line 52, in init self.build() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 32, in build self.quantize_model() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 68, in quantize_model from mqbench.prepare_by_platform import prepare_by_platform ImportError: cannot import name 'prepare_by_platform' from 'mqbench.prepare_by_platform' (/home/user/project/MQBench/mqbench/prepare_by_platform.py)

    Solved by modifying EOD/eod/runner/quant_runner.py lines 68-72:

    from mqbench.prepare_by_platform import prepare_qat_fx_by_platform
    logger.info("prepare quantize model")
    deploy_backend = self.config['quant']['deploy_backend']
    prepare_args = self.config['quant'].get('prepare_args', {})
    self.model = prepare_qat_fx_by_platform(self.model, self.backend_type[deploy_backend], prepare_args)
    

    error_2

    I can train the quant model on a single GPU, but when using multiple GPUs I hit the error below, which is still unsolved.

    Traceback (most recent call last): File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/user/project/EOD/eod/utils/env/launch.py", line 117, in _distributed_worker main_func(args) File "/home/user/project/EOD/eod/commands/train.py", line 121, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/home/user/project/EOD/eod/runner/quant_runner.py", line 15, in init super(QuantRunner, self).init(config, work_dir, training) File "/home/user/project/EOD/eod/runner/base_runner.py", line 52, in init self.build() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 34, in build self.calibrate() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 84, in calibrate self.model(batch) File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/fx/graph_module.py", line 513, in wrapped_call raise e.with_traceback(None) NameError: name 'dist' is not defined

    opened by feixiang7701 8
  • EOD/eod/models/heads/utils/bbox_helper.py

    EOD/eod/models/heads/utils/bbox_helper.py", line 341, in clip_bbox
      dw, dh = img_size[6], img_size[7]
    IndexError: list index out of range

    I found that the preparation in inference.py is:

    def fetch_single(self, filename):
        img = self.image_reader.read(filename)
        data = EasyDict(
            {"filename": filename, "origin_image": img, "image": img, "flipped": False}
        )
        data = self.transformer(data)
        scale_factor = data.get("scale_factor", 1)

        image_h, image_w = get_image_size(img)
        new_image_h, new_image_w = get_image_size(data.image)
        data.image_info = [
            new_image_h,
            new_image_w,
            scale_factor,
            image_h,
            image_w,
            data.flipped,
            filename,
        ]
        data.image = data.image.cuda()
        return data

    so image_info has only 7 elements, meaning the index [7] above is out of range. How can this be resolved?

    opened by jinfagang 5
  • no module named 'petrel_client', no module named 'spring_aux', import error No module named 'mqbench', import error No module named 'msbench.nn', free(): invalid pointer.

    Hello, I hit these errors: 1. no module named 'petrel_client'; 2. no module named 'spring_aux'; 3. import error: No module named 'mqbench'; 4. import error: No module named 'msbench.nn'; 5. free(): invalid pointer. Can you tell me how to tackle these problems?

    opened by trhao 4
  • Is sigmoid classifier suitable for multi-classification(num of categories > 1000 in LVIS) problems?

    Since your method is based on a sigmoid classifier, I am curious whether your detection results on LVIS show many false positives at the same location but with different categories. I ask because I used to train a one-stage detector on a dataset similar to LVIS (a long-tailed logo dataset with 352 categories), and I got many FPs with different categories at the same location. I wonder whether you have encountered the same situation. Thanks! alfaromeo5 I think it may be due to the sigmoid classifier, which consists of multiple independent binary classifiers and may not be suitable for multi-class problems with more than 1000 categories. Of course this is just my conjecture; any advice is welcome...

    opened by Icecream-blue-sky 4
  • [Urgent!!!] Where are kd_runner and bignas_runner? Why aren't there any branches in the repository? Is it because the code is not fully uploaded?

    When I tried the knowledge distillation and model search parts of the code, I found that kd_runner and bignas_runner could not be found, respectively. Is the uploaded repository code incomplete?

    opened by TheWangYang 2
  • (Resolved!!!) No module named 'petrel_client' init petrel failed No module named 'spring_aux' ImportError:  cannot import name 'gpu_iou_overlap' from 'up.extensions'.

    The following error occurs after the environment is configured and the following command is executed (the same error occurs on both Windows and Linux):

    sh scripts/dist_train.sh 2 configs/cls/resnet/resnet18.yaml
    
    No module named 'petrel_client'
    init petrel failed
    No module named 'spring_aux'
    
    2022-11-28 00:53:13,270-rk0-normalize.py#38: import error No module named 'mqbench'; If you need Mqbench to quantize model, you should add Mqbench to this project. Or just ignore this error.
    2022-11-28 00:53:13,270-rk0-normalize.py#45: import error No module named 'msbench'; If you need Msbench to prune model, you should add Msbench to this project. Or just ignore this error.
    
    Traceback (most recent call last):
    File "D:\anaconda3\envs\python37\lib\runpy.py", line 183, in _run_module_as_main
      mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
    File "D:\anaconda3\envs\python37\lib\runpy.py", line 142, in _get_module_details
      return _get_module_details(pkg_main_name, error)
    File "D:\anaconda3\envs\python37\lib\runpy.py", line 109, in _get_module_details
      __import__(pkg_name)
    File "D:\pycharm_work_place\United-Perception\up\__init__.py", line 26, in <module>
      from .tasks import *
    File "D:\pycharm_work_place\United-Perception\up\tasks\__init__.py", line 24, in <module>
      globals()[fp] = importlib.import_module('.' + fp, __package__)
    File "D:\anaconda3\envs\python37\lib\importlib\__init__.py", line 127, in import_module
      return _bootstrap._gcd_import(name[level:], package, level)
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\__init__.py", line 2, in <module>
      from .models import *  # noqa
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\__init__.py", line 1, in <module>
      from .heads import *  # noqa
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\__init__.py", line 2, in <module>
      from .bbox_head import *  # noqa
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\bbox_head\__init__.py", line 1, in <module>
      from .bbox_head import *  # noqa
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\bbox_head\bbox_head.py", line 6, in <module>
      from up.tasks.det.models.utils.assigner import map_rois_to_level
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\__init__.py", line 3, in <module>
      from .matcher import *  # noqa
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\matcher.py", line 6, in <module>
      from up.tasks.det.models.utils.bbox_helper import offset2bbox
    File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\bbox_helper.py", line 10, in <module>
      from up.extensions import gpu_iou_overlap

    ImportError: cannot import name 'gpu_iou_overlap' from 'up.extensions' (D:\pycharm_work_place\United-Perception\up\extensions\__init__.py)
    
    

    Why can't the module be imported? What should I do? I would be grateful if someone could help me solve this problem!

    opened by TheWangYang 2
  • 'Conv2d' object has no attribute 'register_full_backward_hook'

    Hello, running either the example eval or train commands raises this error. Is it related to compilation during installation? By all appearances the installation succeeded. Thanks for any help.

    Traceback (most recent call last): File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/main.py", line 27, in main() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/main.py", line 21, in main args.run(args) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/commands/train.py", line 161, in _main launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/env/launch.py", line 68, in launch main_func(*(args,)) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/commands/train.py", line 140, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 60, in init self.build() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 103, in build self.build_hooks() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 296, in build_hooks self._hooks = build_hooks(self, cfg_hooks, add_log_if_not_exists=True) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1114, in build_hooks hooks = [build_single_hook(cfg) for cfg in cfg_list] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1114, in hooks = [build_single_hook(cfg) for cfg in cfg_list] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1109, in build_single_hook return HOOK_REGISTRY.build(cfg) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/registry.py", line 111, in build raise e File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/registry.py", line 101, in build return build_fn(**obj_kwargs) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 593, in init m.register_full_backward_hook(_backward_fn_hook) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in getattr type(self).name, name)) torch.nn.modules.module.ModuleAttributeError: 'Conv2d' object has no attribute 'register_full_backward_hook'

    opened by KaichaoLiang 2
  • Questions about EFL

    Hello author, thank you for the excellent work. I have a question about the EFL formula shown in the figure below and hope you can help me: (1) You mention that for severely imbalanced classes the gamma value should be large, but this also reduces those classes' contribution to the final loss, so you add a weight factor. While reproducing your code I ran into a question: you compute the loss for each class separately and then sum them, so when computing the loss of one class, how are the gamma values of the other classes determined? Are they simply set to 0? In your code you directly multiply by a computed gamma (second figure), but then for a given class, the gamma of the imbalanced class is very small while the gamma of the background class is sometimes rather large. Could you explain this formula in more detail? Thanks!

    opened by qdd1234 2
  • YOLOX QAT succeeds with no accuracy drop, but deploying to Tengine raises multiple problems

    Quantized YOLOX based on UP: QAT training succeeded with no accuracy drop, but deploying to Tengine raises several problems. 1: missing key, a quantization node is missing.

    2: fake_quantize_per_tensor_affine() received an invalid combination of arguments. The argument mismatch appears when exporting ONNX; the error log is attached as deploy_tengine_error.txt.

    Environment: Ubuntu 20.04, RTX 3060 Ti, CUDA 11.4, torch 1.10.0+cu111, MQBench 0.0.6, onnx 1.7.0

    opened by RedHandLM 1
  • from .._C import xxx, error

    Hi, I got the error 'ImportError: cannot import name 'naive_nms' from 'up.extensions.csrc' (unknown location)'.

    As for naive_nms, it's not in csrc? Where is it?

    opened by EthanChen1234 1
  • BigNas demo

    opened by howardgriffin 1
  • Add resnet-ssd

    Train command:

    python -u -m up train --ng=2 --nm=1 --launch=pytorch --config=configs/det/ssd/ssd-r34-300.yaml --display=100

    ResNet34-SSD result: average mAP 0.253

    ResNet34-SSD with 4-bit asymmetric per-channel quantization: average mAP 0.219

    opened by wangshankun 0
  • Does this framework not support distillation?

    Traceback (most recent call last): File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/main.py", line 31, in main() File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/main.py", line 25, in main args.run(args) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/commands/train.py", line 161, in _main launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/utils/env/launch.py", line 68, in launch main_func(*(args,)) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/commands/train.py", line 140, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/utils/general/registry.py", line 81, in get assert module_name in self, '{} is not supported, avaiables are:{}'.format(module_name, self) AssertionError: kd is not supported, avaiables are:{'base': <class 'up.runner.base_runner.BaseRunner'>}

    opened by ersanliqiao 5
Releases: v0.3.0_github

Owner: Model Infra