This is a demo app for DNN-based video quality enhancement in video streaming applications.

Overview

MoViDNN: A Mobile Platform for Evaluating Video Quality Enhancement with Deep Neural Networks

MoViDNN is an Android application that can be used to evaluate DNN-based video quality enhancement for mobile devices. We provide the structure to evaluate both super-resolution and denoising/deblocking DNNs in this application. However, the structure can easily be extended to additional approaches such as video frame interpolation.

Moreover, MoViDNN can also be used as a subjective test environment to evaluate DNN-based enhancements.

We use TensorFlow Lite as the DNN framework and FFmpeg for video processing.

We also provide a Python repository that can be used to convert existing TensorFlow/Keras models to TensorFlow Lite versions for Android.

Preparation
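
A minimal sketch of such a conversion, assuming a standard saved Keras model, is shown below; the actual script in the repository may differ, and the file names are placeholders:

import tensorflow as tf

# Load an existing Keras model (placeholder file name).
model = tf.keras.models.load_model("espcn_x2.h5")

# Convert it to a TensorFlow Lite flatbuffer; optionally apply the default
# optimizations (e.g. post-training quantization).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the .tflite file so it can later be copied to /MoViDNN/Networks/.
with open("espcn_x2.tflite", "wb") as f:
    f.write(tflite_model)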

DNN Evaluation

MoViDNN can be used as a platform to evaluate the performance of video quality enhancement DNNs. It provides objective metrics (PSNR and SSIM) for the whole video, along with execution performance measurements on the device (execution time and frames per second).
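
For reference, these metrics follow their standard definitions, with MAX the peak pixel value (255 for 8-bit video) and MSE the mean squared error between the enhanced and reference frames:

\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right),
\qquad
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

FFmpeg's psnr and ssim filters report these both per frame and as averages over the whole sequence.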

DNN Configuration

This is the first screen of the DNN test. Here, the DNN, the accelerator, and the input videos to be used during the DNN evaluation are selected.
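
Conceptually, what this screen produces is a small configuration record, illustrated by the hypothetical Python sketch below (field names and accelerator labels are assumptions; the app itself is an Android application and stores this internally):

from dataclasses import dataclass, field
from typing import List

@dataclass
class DnnTestConfig:
    # Hypothetical illustration of the choices made on the configuration screen.
    network: str = "espcn_x2.tflite"   # a model from /MoViDNN/Networks/
    accelerator: str = "GPU"           # e.g. CPU, GPU, or NNAPI
    input_videos: List[str] = field(default_factory=list)

config = DnnTestConfig(input_videos=["SoccerGame.mp4", "Traffic.mp4"])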

DNN Execution

Once the configuration is completed, the DNN execution activity runs. It begins by extracting each frame from the input video with FFmpeg and saving them into a temporary folder. Afterward, the DNN is applied to each frame, and the results are saved into another temporary folder. Once the DNN-applied frames are ready, they are converted back into a video with FFmpeg. Finally, objective metrics are computed with FFmpeg using the DNN-applied video and the input video.
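
A minimal Python sketch of the same pipeline is given below for illustration; in the app these steps run on Android, and the file names, the model's expected input layout (RGB in [0, 1]), and the exact FFmpeg options are assumptions:

import glob
import os
import subprocess

import numpy as np
import tensorflow as tf
from PIL import Image

os.makedirs("frames", exist_ok=True)
os.makedirs("enhanced", exist_ok=True)

# 1) Extract every frame of the input video into a temporary folder.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%05d.png"], check=True)

# 2) Apply the DNN to each frame and save the results into another folder.
interpreter = tf.lite.Interpreter(model_path="espcn_x2.tflite")
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

for path in sorted(glob.glob("frames/*.png")):
    frame = np.asarray(Image.open(path), dtype=np.float32)[None, ...] / 255.0
    interpreter.resize_tensor_input(inp["index"], frame.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    enhanced = np.clip(interpreter.get_tensor(out["index"])[0] * 255.0, 0, 255)
    Image.fromarray(enhanced.astype(np.uint8)).save(
        os.path.join("enhanced", os.path.basename(path)))

# 3) Convert the DNN-applied frames back into a video.
subprocess.run(["ffmpeg", "-i", "enhanced/%05d.png", "enhanced.mp4"], check=True)

# 4) Compute PSNR and SSIM between the DNN-applied video and the input video
#    (both videos must have the same resolution for these filters).
subprocess.run(["ffmpeg", "-i", "enhanced.mp4", "-i", "input.mp4",
                "-lavfi", "psnr", "-f", "null", "-"], check=True)
subprocess.run(["ffmpeg", "-i", "enhanced.mp4", "-i", "input.mp4",
                "-lavfi", "ssim", "-f", "null", "-"], check=True)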

In this step, the DNN-applied video is saved into the DNNResults/Videos/ folder, and a CSV file containing the objective metrics for each video is saved into the DNNResults/Metrics/ folder.

Adding New DNNs and Videos

MoViDNN comes with five test videos, two super-resolution models (ESPCN, EVSRNet), and one deblocking model (DnCNN). Additional test videos and DNNs can be added to MoViDNN.

To add a new DNN model, use the quantization script to prepare it for MoViDNN. Once that is done, put your model into the /MoViDNN/Networks/ folder on your mobile device's storage and it will be ready for evaluation. Similarly, to add new test videos, simply move them into the /MoViDNN/InputVideos/ folder on your device's storage, as shown in the directory layout below.

MoViDNN
│
└───Networks
│   │   dncnn_x1.tflite
│   │   espcn_x2.tflite
│   │
│   │  <YourModel>.tflite
└───InputVideos
│   │   SoccerGame.mp4
│   │   Traffic.mp4
│   │
│   │  <YourVideo>.mp4
..

Subjective Evaluation

MoViDNN can also be used as a subjective test platform to evaluate DNN-applied videos. Once the DNN evaluation is done for a given network and the resulting video is saved, a subjective test can be started.

On the first screen, instructions are shown to the tester. Once they have been read carefully, the test can be started. The subjective test part of MoViDNN displays all selected videos in random order. After each video, the tester is asked to rate the video quality from 1 to 5.

In the end, the ratings are saved into a CSV file that can be used for later analysis.
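
The exact CSV layout is not described here, but assuming one row per rating with hypothetical "video" and "rating" columns, a typical follow-up step is to compute a mean opinion score (MOS) per video:

import csv
from collections import defaultdict
from statistics import mean

# Hypothetical file name and column names ("video", "rating" with values 1-5).
ratings = defaultdict(list)
with open("subjective_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        ratings[row["video"]].append(int(row["rating"]))

for video, scores in sorted(ratings.items()):
    print(f"{video}: MOS = {mean(scores):.2f} over {len(scores)} ratings")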

Authors

  • Ekrem Çetinkaya - Christian Doppler Laboratory ATHENA, Alpen-Adria-Universitaet Klagenfurt - [email protected]
  • Minh Nguyen - Christian Doppler Laboratory ATHENA, Alpen-Adria-Universitaet Klagenfurt - [email protected]

Owner

ATHENA Christian Doppler (CD) Laboratory - Adaptive Streaming over HTTP and Emerging Networked Multimedia Services