
Overview

scc4onnx

Very simple NCHW and NHWC conversion tool for ONNX. Changes each specified input OP to the requested input order, and can also swap the channel order between RGB and BGR. Simple Channel Converter for ONNX.

https://github.com/PINTO0309/simple-onnx-processing-tools


Key concept

  • Lets the user specify the name of the input OP whose input order should be changed.
  • Any number of dimensions can be changed, not only 4-dimensional layouts such as NCHW and NHWC.
  • Simply rewrites the input OP to the specified order and inserts a Transpose immediately after it, so the processing of subsequent OPs is unaffected (see the sketch below).
  • Lets the user swap the channel order between RGB and BGR via an option.
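
The behavior in the third bullet can be reproduced by hand with onnx and onnx_graphsurgeon. The sketch below only illustrates the concept and is not the actual scc4onnx implementation; the model path, the choice of the first input, and the permutation are assumptions.

import onnx
import onnx_graphsurgeon as gs
import numpy as np

graph = gs.import_onnx(onnx.load("model.onnx"))

perm = [0, 2, 3, 1]                    # e.g. NCHW -> NHWC for the first input
inp = graph.inputs[0]
orig_shape = list(inp.shape)

# Downstream OPs keep consuming a tensor in the original layout.
restored = gs.Variable(inp.name + "_restored", dtype=inp.dtype, shape=orig_shape)
for node in graph.nodes:
    for i, t in enumerate(node.inputs):
        if t is inp:
            node.inputs[i] = restored

# Rewrite the graph input to the requested order ...
inp.shape = [orig_shape[i] for i in perm]

# ... and insert a Transpose with the inverse permutation right after it.
inv_perm = [int(i) for i in np.argsort(perm)]
graph.nodes.append(
    gs.Node(op="Transpose", attrs={"perm": inv_perm}, inputs=[inp], outputs=[restored])
)

graph.toposort()
onnx.save(gs.export_onnx(graph), "model_ord.onnx")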

1. Setup

1-1. HostPC

### option
$ echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U scc4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/scc4onnx:latest

### docker build
$ docker build -t pinto0309/scc4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/scc4onnx:latest
$ cd /workdir

2. CLI Usage

$ scc4onnx -h

usage:
  scc4onnx [-h]
  --input_onnx_file_path INPUT_ONNX_FILE_PATH
  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
  [--input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM]
  [--channel_change_inputs INPUT_OP_NAME DIM]
  [--non_verbose]

optional arguments:
  -h, --help
      show this help message and exit

  --input_onnx_file_path INPUT_ONNX_FILE_PATH
      Input onnx file path.

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
      Output onnx file path.

  --input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM
      Specify the name of the input_op to be dimensionally changed and the order of the
      dimensions after the change.
      The name of the input_op to be dimensionally changed can be specified multiple times.

      e.g.
      --input_op_names_and_order_dims aaa [0,3,1,2] \
      --input_op_names_and_order_dims bbb [0,2,3,1] \
      --input_op_names_and_order_dims ccc [0,3,1,2,4,5]

  --channel_change_inputs INPUT_OP_NAME DIM
      Change the channel order of RGB and BGR.
      If the original model is RGB, it is transposed to BGR.
      If the original model is BGR, it is transposed to RGB.
      It can be selectively specified from among the OP names specified
      in --input_op_names_and_order_dims.
      OP names not specified in --input_op_names_and_order_dims are ignored.
      It can be specified as many times as there are OP names
      given in --input_op_names_and_order_dims.
      --channel_change_inputs op_name dimension_number_representing_the_channel
      dimension_number_representing_the_channel must specify the dimension position before
      the change in input_op_names_and_order_dims.
      For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.

      e.g.
      --channel_change_inputs aaa 3 \
      --channel_change_inputs bbb 1 \
      --channel_change_inputs ccc 5

  --non_verbose
      Do not show all information logs. Only error logs are displayed.
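
The RGB<->BGR change performed by --channel_change_inputs is conceptually a reversal of the channel axis. A minimal NumPy illustration of that idea (not the tool's internal implementation):

import numpy as np

nchw = np.random.rand(1, 3, 240, 320).astype(np.float32)  # e.g. an RGB batch in NCHW (channel dim = 1)
bgr_nchw = nchw[:, ::-1, :, :]                             # reverse dim 1 -> BGR

nhwc = np.random.rand(1, 240, 320, 3).astype(np.float32)  # same data laid out as NHWC (channel dim = 3)
bgr_nhwc = nhwc[..., ::-1]                                 # reverse dim 3 -> BGR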

3. In-script Usage

$ python
>>> from scc4onnx import order_conversion
>>> help(order_conversion)
Help on function order_conversion in module scc4onnx.onnx_input_order_converter:

order_conversion(
  input_op_names_and_order_dims: Union[dict, NoneType] = None,
  channel_change_inputs: Union[dict, NoneType] = None,
  input_onnx_file_path: Union[str, NoneType] = '',
  output_onnx_file_path: Union[str, NoneType] = '',
  onnx_graph: Union[onnx.onnx_ml_pb2.ModelProto, NoneType] = None,
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    input_onnx_file_path: Optional[str]
        Input onnx file path.
        Either input_onnx_file_path or onnx_graph must be specified.
    
    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If output_onnx_file_path is not specified, no .onnx file is output.
    
    onnx_graph: Optional[onnx.ModelProto]
        onnx.ModelProto.
        Either input_onnx_file_path or onnx_graph must be specified.
        If onnx_graph is specified, input_onnx_file_path is ignored and onnx_graph is processed.
    
    input_op_names_and_order_dims: Optional[dict]
        Specify the name of the input_op to be dimensionally changed and
        the order of the dimensions after the change.
        The name of the input_op to be dimensionally changed
        can be specified multiple times.
    
        e.g.
        input_op_names_and_order_dims = {
            "input_op_name1": [0,3,1,2],
            "input_op_name2": [0,2,3,1],
            "input_op_name3": [0,3,1,2,4,5],
        }
    
    channel_change_inputs: Optional[dict]
        Change the channel order of RGB and BGR.
        If the original model is RGB, it is transposed to BGR.
        If the original model is BGR, it is transposed to RGB.
        It can be selectively specified from among the OP names
        specified in input_op_names_and_order_dims.
        OP names not specified in input_op_names_and_order_dims are ignored.
        It can be specified as many times as there are OP names
        given in input_op_names_and_order_dims.
        channel_change_inputs = {"op_name": dimension_number_representing_the_channel}
        dimension_number_representing_the_channel must specify
        the dimension position before the change in input_op_names_and_order_dims.
        For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
    
        e.g.
        channel_change_inputs = {
            "aaa": 1,
            "bbb": 3,
            "ccc": 2,
        }
    
    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False
    
    Returns
    -------
    order_converted_graph: onnx.ModelProto
        Order converted onnx ModelProto

4. CLI Execution

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
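
A quick way to confirm the result is to inspect the converted input shapes with the onnx package. This is only a sanity check; the expected NHWC shape is an assumption based on the 240x320 example model.

import onnx

model = onnx.load("crestereo_next_iter2_240x320_ord.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. left [1, 240, 320, 3] after the [0,2,3,1] conversion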

5. In-script Execution

from scc4onnx import order_conversion

order_converted_graph = order_conversion(
    onnx_graph=graph,
    input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
    channel_change_inputs={"left": 1, "right": 1},
    non_verbose=True,
)
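
Because output_onnx_file_path is omitted in the example above, no .onnx file is written; the returned ModelProto can be saved with the standard onnx API if needed:

import onnx

# Persist the converted model returned by order_conversion.
onnx.save(order_converted_graph, "crestereo_next_iter2_240x320_ord.onnx")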

6. Sample

6-1. Transpose only


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1]


6-2. Transpose + RGB<->BGR


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
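
After this conversion the model accepts NHWC BGR inputs directly, so a frame read with OpenCV (which returns HWC BGR) can be fed without any transpose or channel swap in user code. A minimal sketch assuming onnxruntime and opencv-python are installed, with file names as placeholders and any model-specific normalization ignored:

import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("crestereo_next_iter2_240x320_ord.onnx")

# cv2.imread already returns HWC BGR, which now matches the converted inputs.
left = cv2.resize(cv2.imread("left.png"), (320, 240)).astype(np.float32)[np.newaxis, ...]
right = cv2.resize(cv2.imread("right.png"), (320, 240)).astype(np.float32)[np.newaxis, ...]

outputs = sess.run(None, {"left": left, "right": right})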


6-3. RGB<->BGR only


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--channel_change_inputs left 1 \
--channel_change_inputs right 1


7. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues


Releases(1.0.5)
  • 1.0.5(Sep 9, 2022)

    • Add short form parameter
      $ scc4onnx -h
      
      usage:
        scc4onnx [-h]
        -if INPUT_ONNX_FILE_PATH
        -of OUTPUT_ONNX_FILE_PATH
        [-ioo INPUT_OP_NAME ORDER_DIM]
        [-cci INPUT_OP_NAME DIM]
        [-n]
      
      optional arguments:
        -h, --help
            show this help message and exit
      
        -if INPUT_ONNX_FILE_PATH, --input_onnx_file_path INPUT_ONNX_FILE_PATH
            Input onnx file path.
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
            Output onnx file path.
      
        -ioo INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS, --input_op_names_and_order_dims INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS
            Specify the name of the input_op to be dimensionally changed and the order of the
            dimensions after the change.
            The name of the input_op to be dimensionally changed can be specified multiple times.
      
            e.g.
            --input_op_names_and_order_dims aaa [0,3,1,2] \
            --input_op_names_and_order_dims bbb [0,2,3,1] \
            --input_op_names_and_order_dims ccc [0,3,1,2,4,5]
      
        -cci CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS, --channel_change_inputs CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS
            Change the channel order of RGB and BGR.
            If the original model is RGB, it is transposed to BGR.
            If the original model is BGR, it is transposed to RGB.
            It can be selectively specified from among the OP names specified
            in --input_op_names_and_order_dims.
            OP names not specified in --input_op_names_and_order_dims are ignored.
            It can be specified as many times as there are OP names
            given in --input_op_names_and_order_dims.
            --channel_change_inputs op_name dimension_number_representing_the_channel
            dimension_number_representing_the_channel must specify the dimension position before
            the change in input_op_names_and_order_dims.
            For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
      
            e.g.
            --channel_change_inputs aaa 3 \
            --channel_change_inputs bbb 1 \
            --channel_change_inputs ccc 5
      
        -n, --non_verbose
            Do not show all information logs. Only error logs are displayed.
      

    Full Changelog: https://github.com/PINTO0309/scc4onnx/compare/1.0.4...1.0.5

  • 1.0.4(May 25, 2022)

  • 1.0.3(May 15, 2022)

  • 1.0.2(May 10, 2022)

  • 1.0.1(Apr 19, 2022)

  • 1.0.0(Apr 18, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.