Easy and Efficient Object Detector

Overview

EOD

Easy and Efficient Object Detector

EOD (Easy and Efficient Object Detection) is a general object-detection model production framework. It aims to provide two key features for object detection:

  • Efficient: we focus on training VERY HIGH ACCURACY single-shot detection models, and model compression (quantization/sparsity) is well addressed.
  • Easy: easy to use, easy to add new features (backbone/head/neck), easy to deploy.

Highlights:

  • Large-Scale Dataset Training Detail
  • Equalized Focal Loss for Dense Long-Tailed Object Detection (EFL)
  • Improved YOLOX (YOLOX-RET)
  • Quantization Aware Training (QAT) interface based on MQBench

The master branch works with PyTorch 1.8.1. Because of this PyTorch version requirement, 30-series (Ampere) GPUs are not well supported.

Install

pip install -r requirements.txt
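
Since the master branch targets PyTorch 1.8.1 (see above), it may help to pin the matching Torch packages first; a minimal sketch, where the torchvision pairing is an assumption to verify against your CUDA setup:

# Assumed version pairing for PyTorch 1.8.1; adjust for your CUDA version.
pip install torch==1.8.1 torchvision==0.9.1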

Get Started

Example scripts are provided in scripts/.

Export Module

Export the EOD root directory into ROOT and PYTHONPATH:

ROOT=../../
export ROOT=$ROOT
export PYTHONPATH=$ROOT:$PYTHONPATH

Train

Step 1: edit the meta_file and image_dir fields of image_reader:

dataset:
  type: coco # dataset type
  kwargs:
    source: train
    meta_file: coco/annotations/instances_train2017.json
    image_reader:
      type: fs_opencv
      kwargs:
        image_dir: coco/train2017
        color_mode: BGR

Step 2: train

python -m eod train --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 8 --launch pytorch 2>&1 | tee log.train

  • --config: a YAML config under configs/
  • --nm: number of machines
  • --ng: number of GPUs per machine
  • --launch: launcher, either slurm or pytorch
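
For multi-machine training, the same flags apply; a sketch of a hypothetical two-machine, 16-GPU Slurm launch (values are illustrative, adjust to your cluster):

python -m eod train --config configs/det/yolox/yolox_tiny.yaml --nm 2 --ng 8 --launch slurm 2>&1 | tee log.train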

Step 3: fp16 (optional). Add the fp16 setting to the runtime config:

runtime:
    fp16: True

Eval

Step 1: edit the config of the evaluation dataset
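
A minimal sketch of what this might look like, mirroring the training config above with standard COCO validation paths (the exact paths are assumptions for illustration):

dataset:
  type: coco # dataset type
  kwargs:
    source: val
    meta_file: coco/annotations/instances_val2017.json
    image_reader:
      type: fs_opencv
      kwargs:
        image_dir: coco/val2017
        color_mode: BGR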

Step 2: test

python -m eod train -e --config configs/det/yolox/yolox_tiny.yaml --nm 1 --ng 1 --launch pytorch 2>&1 | tee log.test

Demo

Step 1: add the visualizer config to the YAML

inference:
  visualizer:
    type: plt
    kwargs:
      class_names: ['__background__', 'person'] # class names
      thresh: 0.5

Step 2: inference

python -m eod inference --config configs/det/yolox/yolox_tiny.yaml --ckpt ckpt_tiny.pth -i imgs -v vis_dir

  • --ckpt: the checkpoint to run inference with
  • -i: an image directory or a single image
  • -v: the directory for saving visualization results
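
Since -i also accepts a single image, a run over one file might look like this (the file name is hypothetical):

python -m eod inference --config configs/det/yolox/yolox_tiny.yaml --ckpt ckpt_tiny.pth -i imgs/demo.jpg -v vis_dir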

Mpirun mode

EOD supports launching tasks with mpirun; MPI must be installed first:

# download mpich
wget https://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz # other versions: https://www.mpich.org/static/downloads/

tar -zxvf mpich-3.2.1.tar.gz
cd mpich-3.2.1
./configure  --prefix=/usr/local/mpich-3.2.1
make && make install
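
After installation, you may need to put the MPICH binaries on your PATH (assuming the prefix used above):

# Make the freshly installed MPICH visible to the shell
export PATH=/usr/local/mpich-3.2.1/bin:$PATH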

Launch task

mpirun -np 8 python -m eod train --config configs/det/yolox/yolox_tiny.yaml --launch mpi 2>&1 | tee log.train

  • Prefix the command with mpirun -np x, where x is the number of processes
  • mpirun mode is convenient for debugging with pdb
  • --launch: mpi
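
Since mpirun mode is handy for pdb, a single-process debug launch is a reasonable sketch (breakpoints can then be set as usual):

mpirun -np 1 python -m eod train --config configs/det/yolox/yolox_tiny.yaml --launch mpi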

Custom Example

Benchmark

Quick Run

Tutorials

Useful Tools

References

Acknowledgments

Thanks to all past contributors, especially opcoder.

Comments
  • Questions about EFL code implementation

    Hello, can you explain the mechanism of the gradient collection function? Although the gradient gathering function is defined in the forward function, it does not seem to be called. Even though self.pos_neg.detach() is used, what is the input parameter of the collect_grad() function? Does it really take effect?

    opened by xc-chengdu 8
  • How to use quant_runner

    Thank you for the excellent work on MQBench and EOD. I am interested in quantization and tried the config retinanet-r50_1x_quant.yaml, but hit some errors. Besides, I found there is no quantization documentation in this project. Could you give some suggestions on how to use the quant_runner?

    Here are the errors I encountered when using retinanet-r50_1x_quant.yaml:

    error_1

    File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/user/project/EOD/eod/utils/env/launch.py", line 117, in _distributed_worker main_func(args) File "/home/user/project/EOD/eod/commands/train.py", line 121, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/home/user/project/EOD/eod/runner/quant_runner.py", line 14, in init super(QuantRunner, self).init(config, work_dir, training) File "/home/user/project/EOD/eod/runner/base_runner.py", line 52, in init self.build() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 32, in build self.quantize_model() File "/home/user/project/EOD/eod/runner/quant_runner.py", line 68, in quantize_model from mqbench.prepare_by_platform import prepare_by_platform ImportError: cannot import name 'prepare_by_platform' from 'mqbench.prepare_by_platform' (/home/user/project/MQBench/mqbench/prepare_by_platform.py)

    Solved by modifying EOD/eod/runner/quant_runner.py, lines 68-72:

    from mqbench.prepare_by_platform import prepare_qat_fx_by_platform
    logger.info("prepare quantize model")
    deploy_backend = self.config['quant']['deploy_backend']
    prepare_args = self.config['quant'].get('prepare_args', {})
    self.model = prepare_qat_fx_by_platform(self.model, self.backend_type[deploy_backend], prepare_args)
    

    error_2

    I can train the quant model on a single GPU, but when using multiple GPUs I hit the error below, which is still unsolved.

    Traceback (most recent call last):
      File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/home/user/project/EOD/eod/utils/env/launch.py", line 117, in _distributed_worker
        main_func(args)
      File "/home/user/project/EOD/eod/commands/train.py", line 121, in main
        runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs'])
      File "/home/user/project/EOD/eod/runner/quant_runner.py", line 15, in __init__
        super(QuantRunner, self).__init__(config, work_dir, training)
      File "/home/user/project/EOD/eod/runner/base_runner.py", line 52, in __init__
        self.build()
      File "/home/user/project/EOD/eod/runner/quant_runner.py", line 34, in build
        self.calibrate()
      File "/home/user/project/EOD/eod/runner/quant_runner.py", line 84, in calibrate
        self.model(batch)
      File "/home/user/miniconda3/envs/eod/lib/python3.8/site-packages/torch/fx/graph_module.py", line 513, in wrapped_call
        raise e.with_traceback(None)
    NameError: name 'dist' is not defined

    opened by feixiang7701 8
  • EOD/eod/models/heads/utils/bbox_helper.py

    EOD/eod/models/heads/utils/bbox_helper.py", line 341, in clip_bbox
      dw, dh = img_size[6], img_size[7]
    IndexError: list index out of range

    I found that in Inference.py the data preparation is:

    def fetch_single(self, filename):
            img = self.image_reader.read(filename)
            data = EasyDict(
                {"filename": filename, "origin_image": img, "image": img, "flipped": False}
            )
            data = self.transformer(data)
            scale_factor = data.get("scale_factor", 1)
    
            image_h, image_w = get_image_size(img)
            new_image_h, new_image_w = get_image_size(data.image)
            data.image_info = [
                new_image_h,
                new_image_w,
                scale_factor,
                image_h,
                image_w,
                data.flipped,
                filename,
            ]
            data.image = data.image.cuda()
            return data
    

    so image_info has only 7 elements (indices 0-6), and the index [7] above is out of range. How can this be resolved?

    opened by jinfagang 5
  • no module named 'petrel_client', no module named 'spring_aux', import error No module named 'mqbench', import error No module named 'msbench.nn', free(): invalid pointer.

    Hello, I hit the following problems: 1. no module named 'petrel_client'; 2. no module named 'spring_aux'; 3. import error: No module named 'mqbench'; 4. import error: No module named 'msbench.nn'; 5. free(): invalid pointer. Can you tell me how to tackle these problems?

    opened by trhao 4
  • Is sigmoid classifier suitable for multi-classification(num of categories > 1000 in LVIS) problems?

    Since your method is based on a sigmoid classifier, I am curious whether your detection results on LVIS show many false positives at the same location but with different categories. I ask because I used to train a one-stage detector on a dataset similar to LVIS (a long-tailed logo dataset with 352 categories), and I got many FPs with different categories at the same location. I wonder if you have encountered the same situation. Thanks!

    alfaromeo5: And I think it may be due to the sigmoid classifier, which consists of multiple independent binary classifiers and may not be suitable for multi-class problems (num of categories > 1000). Of course this is just my conjecture; any advice is welcome.

    opened by Icecream-blue-sky 4
  • [Urgent!!!] Where are kd_runner and bignas_runner? Why aren't there any branches in the repository? Is the code not fully uploaded?

    When I tried the knowledge distillation and model search parts of the code, I found that kd_runner and bignas_runner, respectively, could not be found. Is the uploaded repository code incomplete?

    opened by TheWangYang 2
  • (Resolved!!!) No module named 'petrel_client' init petrel failed No module named 'spring_aux' ImportError:  cannot import name 'gpu_iou_overlap' from 'up.extensions'.

    The following error occurs when the environment is configured and the command below is executed (the same error occurs on both Windows and Linux):

    sh scripts/dist_train.sh 2 configs/cls/resnet/resnet18.yaml
    
    No module named 'petrel_client'
    init petrel failed
    No module named 'spring_aux'
    
    2022-11-28 00:53:13,270-rk0-normalize.py#38:import error No module named 'mqbench'; If you need Mqbench to quantize model,      you should add Mqbench to this project. Or just ignore this error.
    2022-11-28 00:53:13,270-rk0-normalize.py#45:import error No module named 'msbench'; If you need Msbench to prune model,     you should add Msbench to this project. Or just ignore this error.
    
    Traceback (most recent call last):
      File "D:\anaconda3\envs\python37\lib\runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "D:\anaconda3\envs\python37\lib\runpy.py", line 142, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "D:\anaconda3\envs\python37\lib\runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "D:\pycharm_work_place\United-Perception\up\__init__.py", line 26, in <module>
        from .tasks import *
      File "D:\pycharm_work_place\United-Perception\up\tasks\__init__.py", line 24, in <module>
        globals()[fp] = importlib.import_module('.' + fp, __package__)
      File "D:\anaconda3\envs\python37\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\__init__.py", line 2, in <module>
        from .models import *  # noqa
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\__init__.py", line 1, in <module>
        from .heads import *  # noqa
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\__init__.py", line 2, in <module>
        from .bbox_head import *  # noqa
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\bbox_head\__init__.py", line 1, in <module>
        from .bbox_head import *  # noqa
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\heads\bbox_head\bbox_head.py", line 6, in <module>
        from up.tasks.det.models.utils.assigner import map_rois_to_level
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\__init__.py", line 3, in <module>
        from .matcher import *  # noqa
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\matcher.py", line 6, in <module>
        from up.tasks.det.models.utils.bbox_helper import offset2bbox
      File "D:\pycharm_work_place\United-Perception\up\tasks\det\models\utils\bbox_helper.py", line 10, in <module>
        from up.extensions import gpu_iou_overlap

    ImportError: cannot import name 'gpu_iou_overlap' from 'up.extensions' (D:\pycharm_work_place\United-Perception\up\extensions\__init__.py)
    
    

    Why can't the module be imported? What should I do? Any help would be greatly appreciated!

    opened by TheWangYang 2
  • 'Conv2d' object has no attribute 'register_full_backward_hook'

    Hello, running either the sample eval or train reports this error. Is it related to compilation during installation? The installation appeared to succeed. Thanks for any help.

    Traceback (most recent call last): File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/main.py", line 27, in main() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/main.py", line 21, in main args.run(args) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/commands/train.py", line 161, in _main launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/env/launch.py", line 68, in launch main_func(*(args,)) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/commands/train.py", line 140, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 60, in init self.build() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 103, in build self.build_hooks() File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/runner/base_runner.py", line 296, in build_hooks self._hooks = build_hooks(self, cfg_hooks, add_log_if_not_exists=True) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1114, in build_hooks hooks = [build_single_hook(cfg) for cfg in cfg_list] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1114, in hooks = [build_single_hook(cfg) for cfg in cfg_list] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 1109, in build_single_hook return HOOK_REGISTRY.build(cfg) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/registry.py", line 111, in build raise e File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/registry.py", line 101, in build return build_fn(**obj_kwargs) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/United-Perception/up/utils/general/hook_helper.py", line 593, in init m.register_full_backward_hook(_backward_fn_hook) File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-vacv/kaichaoliang/CondaEnv/UnitedDetection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in getattr type(self).name, name)) torch.nn.modules.module.ModuleAttributeError: 'Conv2d' object has no attribute 'register_full_backward_hook'

    opened by KaichaoLiang 2
  • Questions about EFL

    Hello author, thank you for your excellent work. I have a question about the EFL formula shown in the figure below and hope you can help me: (1) You mention that for severely imbalanced classes the gamma value should be large, but that also reduces those classes' contribution to the final loss, so you add a weight factor. While reproducing your code I ran into a question: you compute the loss of each class separately and then sum them, so when computing the loss of one class, how are the gamma values of the other classes determined? Are they simply set to 0? In your code you directly multiply by a computed gamma (second figure), but then for a given class, the gamma of the imbalanced class is very small while the gamma of the background class is sometimes rather large. Could you elaborate on the explanation of this formula? Thanks!

    opened by qdd1234 2
  • YOLOX QAT succeeds with no accuracy drop, but deploying to Tengine hits multiple problems

    Quantized YOLOX based on UP; QAT training succeeded with no accuracy drop, but deploying to Tengine hits multiple problems. 1: missing key, missing quantization nodes.

    2: fake_quantize_per_tensor_affine() received an invalid combination of arguments when exporting ONNX; the error log is in deploy_tengine_error.txt.

    Environment: Ubuntu 20.04, RTX 3060 Ti, CUDA 11.4, torch 1.10.0+cu111, MQBench 0.0.6, onnx 1.7.0.

    opened by RedHandLM 1
  • from .._C import xxx, error

    Hi, I got the error: ImportError: cannot import name 'naive_nms' from 'up.extensions.csrc' (unknown location).

    As for naive_nms, it's not in csrc? Where is it?

    opened by EthanChen1234 1
  • BigNas demo

    opened by howardgriffin 1
  • Add resnet-ssd

    Train command:

    python -u -m up train --ng=2 --nm=1 --launch=pytorch --config=configs/det/ssd/ssd-r34-300.yaml --display=100

    ResNet34-SSD result: avg mAP 0.253

    ResNet34-SSD with 4-bit asymmetric per-channel quantization: avg mAP 0.219

    opened by wangshankun 0
  • Does this framework not support distillation?

    Traceback (most recent call last): File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/main.py", line 31, in main() File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/main.py", line 25, in main args.run(args) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/commands/train.py", line 161, in _main launch(main, args.num_gpus_per_machine, args.num_machines, args=args, start_method=args.fork_method) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/utils/env/launch.py", line 68, in launch main_func(*(args,)) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/commands/train.py", line 140, in main runner = RUNNER_REGISTRY.get(runner_cfg['type'])(cfg, **runner_cfg['kwargs']) File "/data/juicefs_hz_cv_v3/11105507/AAAI2022/united-perception2/up/utils/general/registry.py", line 81, in get assert module_name in self, '{} is not supported, avaiables are:{}'.format(module_name, self) AssertionError: kd is not supported, avaiables are:{'base': <class 'up.runner.base_runner.BaseRunner'>}

    opened by ersanliqiao 5
Releases: v0.3.0_github

Owner: Model Infra