Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices, ACM Multimedia 2021

Overview

Code for ECBSR

Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices
Xindong Zhang, Hui Zeng, Lei Zhang
ACM Multimedia 2021

Code

An older version implemented on top of EDSR is placed in the /legacy folder. For more details, please refer to /legacy/README.md. The following describes the lightweight version implemented by us.

Dependencies & Installation

Please refer to the following simple steps for installation.

git clone https://github.com/xindongzhang/ECBSR.git
cd ECBSR
pip install -r requirements.txt

Training and benchmarking data can be downloaded from DIV2K and benchmark, respectively. Thanks to EDSR for the excellent work.

Training & Testing

You can also try a smaller or larger batch size, depending on the hardware resources available on your GPU server. ECBSR is trained and tested with colors=1, i.e. the Y channel of YCbCr (see the Y-channel sketch after the training commands below).

cd ECBSR

## ecbsr-m4c8-x2-prelu (you can revise the parameters of the yaml config file according to your environment)
python train.py --config ./configs/ecbsr_x2_m4c8_prelu.yml

## ecbsr-m4c8-x4-prelu
python train.py --config ./configs/ecbsr_x4_m4c8_prelu.yml

## ecbsr-m4c16-x2-prelu
python train.py --config ./configs/ecbsr_x2_m4c16_prelu.yml

## ecbsr-m4c16-x4-prelu
python train.py --config ./configs/ecbsr_x4_m4c16_prelu.yml
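As noted above, colors=1 means the models consume only the luma plane. Below is a minimal sketch of extracting the Y channel from an RGB image using the ITU-R BT.601 conversion commonly used in SR evaluation; it is a standalone illustration and not part of the ECBSR scripts (the random input array is just a placeholder for a loaded image).

import numpy as np

def rgb_to_y(rgb_uint8):
    """HxWx3 uint8 RGB -> HxW uint8 Y (luma) channel, ITU-R BT.601."""
    rgb = rgb_uint8.astype(np.float32) / 255.0
    y = 16.0 + 65.481 * rgb[..., 0] + 128.553 * rgb[..., 1] + 24.966 * rgb[..., 2]
    return np.clip(np.rint(y), 0, 255).astype(np.uint8)

# In practice `img` would be a loaded LR/HR image; a random array keeps the sketch self-contained.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
y = rgb_to_y(img)   # single-channel plane matching colors=1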

Hardware deployment

Frontend conversion

We provide a converter for exporting the trained model to different frontends, e.g. onnx/pb/tflite. We have currently developed and tested the model with only one channel (Y of YCbCr). Since the internal data layouts of TensorFlow (NHWC) and PyTorch (NCHW) differ, especially for the pixel-shuffle operation, care must be taken to handle the data layout if you want to extend the PyTorch-based training framework to RGB input data and deploy it on TensorFlow (see the layout sketch after the conversion commands below). The following are demo scripts for converting a model to a specific frontend:

## convert the trained pytorch model to onnx with plain-topology.
python convert.py --config xxx.yml --target_frontend onnx --output_folder XXX --inp_n 1 --inp_c 1 --inp_h 270 --inp_w 480

## convert the trained pytorch model to pb-1.x with plain-topology.
python convert.py --config xxx.yml --target_frontend pb-1.x --output_folder XXX --inp_n 1 --inp_c 1 --inp_h 270 --inp_w 480

## convert the trained pytorch model to pb-ckpt with plain-topology
python convert.py --config xxx.yml --target_frontend pb-ckpt --output_folder XXX --inp_n 1 --inp_c 1 --inp_h 270 --inp_w 480
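To make the NCHW/NHWC caveat above concrete, here is a minimal sketch of how the channel ordering expected by PyTorch's pixel_shuffle differs from TensorFlow's NHWC depth_to_space. It is an illustration with assumed shapes, not part of convert.py.

import torch
import torch.nn.functional as F

# Assumed setup: 1 output color, scale r, NCHW activations as in training.
r, c_out = 2, 1
x = torch.randn(1, c_out * r * r, 4, 4)          # channels = c_out * r * r

# PyTorch pixel_shuffle reads the channel index as c*(r*r) + i*r + j (c = output channel) ...
y = F.pixel_shuffle(x, r)                        # -> (1, c_out, 8, 8)

# ... while TF's depth_to_space (NHWC) reads the depth index as (i*r + j)*C + c.
# Reordering the channels with this index makes the two ops agree:
perm = torch.arange(c_out * r * r).reshape(c_out, r, r).permute(1, 2, 0).reshape(-1)
x_for_tf = x[:, perm].permute(0, 2, 3, 1)        # NHWC tensor to feed depth_to_space

In practice this means the output channels of the last convolution must be permuted accordingly when porting weights from a PyTorch model to a TensorFlow/TFLite graph.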

AI-Benchmark

You can download the newest version of the evaluation tool from AI-Benchmark and then install the app via ADB:

adb install -r [name-of-ai-benchmark].apk

MNN (Coming soon!)

For universal CPU & GPU implementation on mobile hardware.

RKNN (Coming soon!)

For NPU implementation on Rockchip hardware, e.g. RK3399Pro/RK1808.

MiniNet (Coming soon!)

A super lightweight CNN inference framework implemented by us, supporting only 3x3 convolution, element-wise ops, ReLU/PReLU activations, and pixel-shuffle for the common super-resolution task. For more details, please refer to /ECBSR/deploy/mininet.
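For reference, the snippet below is a minimal PyTorch sketch of such a plain topology (only 3x3 convolutions, PReLU activations, and a final pixel-shuffle). The m/c hyper-parameters mirror the m4c8/m4c16 naming, but the code is an illustrative assumption, not the exact ECBSR inference model.

import torch
import torch.nn as nn

class PlainSR(nn.Module):
    """Illustrative plain-topology SR network: 3x3 convs + PReLU + pixel-shuffle."""
    def __init__(self, m=4, c=8, scale=2, colors=1):
        super().__init__()
        layers = [nn.Conv2d(colors, c, 3, padding=1), nn.PReLU(c)]
        for _ in range(m):
            layers += [nn.Conv2d(c, c, 3, padding=1), nn.PReLU(c)]
        layers += [nn.Conv2d(c, colors * scale * scale, 3, padding=1),
                   nn.PixelShuffle(scale)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

# usage: a 1-channel (Y) low-resolution patch upscaled by 2x
sr = PlainSR()(torch.randn(1, 1, 64, 64))   # -> (1, 1, 128, 128)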

Quantization tools (Coming soon!)

For fixed-point arithmetic quantization of image super-resolution models.
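As a rough illustration of what fixed-point quantization means here, the sketch below symmetrically quantizes a convolution weight tensor to int8 with a single per-tensor scale. It is a standalone example, not the upcoming tool.

import torch

def quantize_weight_int8(w):
    """Symmetric per-tensor int8 quantization of a weight tensor (illustrative only)."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

w = torch.randn(8, 8, 3, 3)            # e.g. a 3x3 conv weight (out, in, kH, kW)
q, s = quantize_weight_int8(w)
w_hat = q.float() * s                  # dequantized approximation used at inference
max_err = (w - w_hat).abs().max()      # quantization error is bounded by ~scale/2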

Citation


@inproceedings{zhang2021edge,
  title={Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices},
  author={Zhang, Xindong and Zeng, Hui and Zhang, Lei},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia (ACM MM)},
  year={2021}
}

Acknowledgement

Thanks to EDSR for the pioneering work and excellent codebase! The implementation integrated with EDSR is placed in /legacy.
