edge-SR: Super-Resolution For The Masses

Citation

Pablo Navarrete Michelini, Yunhua Lu, and Xingqun Jiang. "edge-SR: Super-Resolution For The Masses", in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022.

BibTeX

@inproceedings{eSR,
    title     = {edge-{SR}: Super-Resolution For The Masses},
    author    = {Navarrete~Michelini, Pablo and Lu, Yunhua and Jiang, Xingqun},
    booktitle = {Proceedings of the {IEEE/CVF} Winter Conference on Applications of Computer Vision ({WACV})},
    month     = {January},
    year      = {2022},
    pages     = {1078--1087},
    url       = {https://arxiv.org/abs/2108.10335}
}

Instructions:

  • Place input images in the input directory (provided as an empty directory). Color images will be converted to grayscale.

  • To upscale images, run: python run.py.

    Output images will be written to the output directory.

  • The GPU number and model file can be changed in run.py (look for the comment "CHANGE HERE"). A minimal sketch of the underlying pipeline follows this list.
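
The following is only a rough sketch of the kind of pipeline run.py implements, in case you want to script it yourself. The model path 'model.pt' is a placeholder, and the assumption that the saved file deserializes directly to a torch.nn.Module is not guaranteed; run.py remains the authoritative entry point.

import numpy as np
import torch
from PIL import Image

# Pick a device; run.py lets you select the GPU number explicitly.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Placeholder model file; see run.py ("CHANGE HERE") for the actual model selection.
model = torch.load('model.pt', map_location=device)
model.eval()

# Load an image, convert to grayscale, and normalize to a (1, 1, H, W) tensor.
img = Image.open('input/example.png').convert('L')
x = torch.from_numpy(np.array(img)).float().div_(255.)
x = x.unsqueeze(0).unsqueeze(0).to(device)

with torch.no_grad():
    y = model(x).clamp_(0., 1.)

# Back to an 8-bit grayscale image, written to the output directory.
out = Image.fromarray((y[0, 0].cpu().numpy() * 255.).round().astype(np.uint8))
out.save('output/example.png')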

Requirements:

  • Python 3, PyTorch, NumPy, Pillow, OpenCV

Experiment results

  • The data directory contains the file tests.pkl, which stores a Python dictionary with all our test results on different devices. The following sample code shows how to read the file:
>>> import pickle
>>> test = pickle.load(open('tests.pkl', 'rb'))
>>> test['Bicubic_s2']
    {'psnr_Set5': 33.72849620514912,
     'ssim_Set5': 0.9283912810369976,
     'lpips_Set5': 0.14221979230642318,
     'psnr_Set14': 30.286027790636204,
     'ssim_Set14': 0.8694934108301432,
     'lpips_Set14': 0.19383049915943826,
     'psnr_BSDS100': 29.571233006609656,
     'ssim_BSDS100': 0.8418117904964167,
     'lpips_BSDS100': 0.26246454380452633,
     'psnr_Urban100': 26.89378248655882,
     'ssim_Urban100': 0.8407461069831571,
     'lpips_Urban100': 0.21186692919582129,
     'psnr_Manga109': 30.850672809780587,
     'ssim_Manga109': 0.9340133711400112,
     'lpips_Manga109': 0.102985977955641,
     'parameters': 104,
     'speed_AGX': 18.72132628065749,
     'power_AGX': 1550,
     'speed_MaxQ': 632.5429857814075,
     'power_MaxQ': 50,
     'temperature_MaxQ': 76,
     'memory_MaxQ': 2961,
     'speed_RPI': 11.361346064182795,
     'usage_RPI': 372.8714285714285}

The keys of the dictionary identify the name of each model and its hyper-parameters using one of the following formats:

  • Bicubic_s#,
  • eSR-MAX_s#_K#_C#,
  • eSR-TM_s#_K#_C#,
  • eSR-TR_s#_K#_C#,
  • eSR-CNN_s#_C#_D#_S#,
  • ESPCN_s#_D#_S#, or
  • FSRCNN_s#_D#_S#_M#,

where # represents an integer giving the value of the corresponding hyper-parameter. For each model, the dictionary contains a nested dictionary with the information displayed above. This includes: the number of model parameters; the image quality metrics PSNR, SSIM and LPIPS measured on 5 different datasets; as well as power, speed, CPU usage, temperature and memory usage for the devices AGX (Jetson AGX Xavier), MaxQ (GTX 1080 MaxQ) and RPI (Raspberry Pi 400).
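
Because the key format is regular, the dictionary can be filtered and ranked programmatically. A small sketch, assuming it is run from the data directory so that tests.pkl is on the relative path used above:

import pickle

with open('tests.pkl', 'rb') as f:
    tests = pickle.load(f)

# Rank all 2x models by PSNR on Set5 and show model size and Raspberry Pi speed.
two_x = {name: t for name, t in tests.items() if name.split('_')[1] == 's2'}
for name, t in sorted(two_x.items(), key=lambda kv: kv[1]['psnr_Set5'], reverse=True):
    print(f"{name:24s}  PSNR(Set5)={t['psnr_Set5']:.2f}"
          f"  parameters={t['parameters']}  speed_RPI={t['speed_RPI']:.2f}")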
