Unicorn (EuroSys 2022)

Overview

Unicorn can be used for performance analyses of highly configurable systems with causal reasoning. Users or developers can query Unicorn for a performance task.

Abstract

Modern computer systems are highly configurable, with the total variability space sometimes larger than the number of atoms in the universe. Understanding and reasoning about the performance behavior of highly configurable systems, due to a vast variability space, is challenging. State-of-the-art methods for performance modeling and analyses rely on predictive machine learning models; therefore, they (i) become unreliable in unseen environments (e.g., different hardware, workloads) and (ii) produce incorrect explanations. To this end, we propose a new method, called Unicorn, which (i) captures intricate interactions between configuration options across the software-hardware stack and (ii) describes how such interactions impact performance variations via causal inference. We evaluated Unicorn on six highly configurable systems, including three on-device machine learning systems, a video encoder, a database management system, and a data analytics pipeline. The experimental results indicate that Unicorn outperforms state-of-the-art performance optimization and debugging methods. Furthermore, unlike the existing methods, the learned causal performance models reliably predict performance for new environments.

Pre-requisites

  • python 3.6
  • json
  • pandas
  • numpy
  • flask
  • causalgraphicalmodels
  • causalnex
  • graphviz
  • py-causal
  • causality

Please run the following commands to get your system ready to run Unicorn (a quick sanity check for the installed packages follows the commands):

git clone https://github.com/softsys4ai/unicorn.git
cd unicorn
pip install pandas
pip install numpy
pip install flask
pip install causalgraphicalmodels
pip install causalnex
pip install graphviz
pip install py-causal
pip install causality
pip install tensorflow-gpu==1.15
pip install keras
pip install torch==1.4.0 torchvision==0.5.0
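
After installation, the following minimal sanity check (an illustrative sketch, not part of the Unicorn repository) verifies that the main prerequisite packages import cleanly. The import names listed here are assumptions; some pip package names differ from their import names (e.g., py-causal), so adjust the list for your installation.

# Illustrative sanity check (not shipped with Unicorn): verify that the main
# prerequisite packages can be imported. Module names below are assumptions.
import importlib

PACKAGES = [
    "pandas", "numpy", "flask", "causalgraphicalmodels",
    "causalnex", "graphviz", "tensorflow", "keras", "torch", "torchvision",
]

for name in PACKAGES:
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as err:
        print(f"{name}: MISSING ({err})")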

How to use Unicorn

Unicorn can be used for performing different tasks such as performance optimization and performance debugging. Unicorn supports both offline and online modes. In the offline mode, Unicorn can be run on any device using previously measured configurations. In the online mode, measurements are taken directly on NVIDIA Jetson Xavier, NVIDIA Jetson TX2, and NVIDIA Jetson TX1 devices. Collecting measurements from these devices requires sudo privileges, because the device must be set to a new configuration before each measurement.

Debugging (offline)

Unicorn supports debugging and fixing single-objective and multi-objective performance faults. It also supports root-cause analysis of these fixes, such as determining accuracy and computing gain.

Single-objective debugging

To debug single-objective faults in the offline mode using Unicorn please use the following command:

python unicorn_debugging.py  -o objective -s softwaresystem -k hardwaresystem -m mode

Example

To debug single-objective latency faults for Xception in JETSON TX2 in the offline mode using Unicorn please use the following command:

python unicorn_debugging.py  -o inference_time -s Xception -k TX2 -m offline

To debug single-objective energy faults for Bert in JETSON Xavier in the offline mode using Unicorn please use the following command:

python unicorn_debugging.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Multi-objective debugging

To debug multi-objective faults in the offline mode using Unicorn please use the following command:

python unicorn_debugging.py  -o objective1 -o objective2 -s softwaresystem -k hardwaresystem -m mode

Example

To debug multi-objective latency and energy faults for Deepspeech in JETSON TX2 in the offline mode using Unicorn please use the following command:

python unicorn_debugging.py  -o inference_time -o total_energy_consumption -s Deepspeech  -k TX2 -m offline

Optimization (offline)

Unicorn supports single-objective and multi-objective optimization.

Single-objective optimization

To run single-objective optimization in the offline mode using Unicorn please use the following command:

python unicorn_optimization.py  -o objective -s softwaresystem -k hardwaresystem -m mode

Example

To run single-objective latency optimization for Xception in JETSON TX2 in the offline mode using Unicorn please use the following command:

python unicorn_optimization.py  -o inference_time -s Xception -k TX2 -m offline

To run single-objective energy optimization for Bert in JETSON Xavier in the offline mode using Unicorn please use the following command:

python unicorn_optimization.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Multi-objective optimization

To run multi-objective optimization in the offline mode using Unicorn please use the following command:

python unicorn_optimization.py  -o objective1 -o objective2 -s softwaresystem -k hardwaresystem -m mode

Example

To run multi-objective latency and energy optimization for Deepspeech in JETSON TX2 in the offline mode using Unicorn please use the following command:

python unicorn_optimization.py  -o inference_time -o total_energy_consumption -s Deepspeech  -k TX2 -m offline

Transferability

Unicorn supports both single and multi-objective transferability. However, multi-objective transferability is not comprehensively investigated in this version. To determine single-objective transferability of Unicorn use the following command:

python unicorn_transferability.py  -o objective -s softwaresystem -k hardwaresystem

Example

To run single-objective latency transferability for Xception in JETSON TX2 in the offline mode using Unicorn please use the following command:

python unicorn_transferability.py  -o inference_time -s Xception -k TX2 -m offline

To run single-objective energy transferability for Bert in JETSON Xavier in the offline mode using Unicorn please use the following command:

python unicorn_transferability.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Data generation

To run experiments for a particular software system on NVIDIA Jetson Xavier, NVIDIA Jetson TX2, or NVIDIA Jetson TX1 devices, a Flask app must be launched first. Please use the following command to start the app on localhost:

python run_service.py softwaresystem

For example, to initialize the Flask app for the Xception software system, please use:

python run_service.py Image

Once the Flask app is running and the model server is ready, please use the following command to collect performance measurements for different configurations (a small automation sketch follows the command):

python run_params.py softwaresystem
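
For convenience, the two commands above can be driven from one small script. The following is a minimal sketch under the assumption that run_service.py and run_params.py sit in the repository root and take the software system name as shown above; the 30-second wait is an arbitrary placeholder for the model server start-up time.

# Minimal automation sketch (assumptions: scripts in the repository root,
# invoked with the software system name exactly as in the commands above).
import subprocess
import time

SOFTWARE = "Image"  # e.g., the Xception image workload served by run_service.py

# Start the Flask measurement service in the background.
service = subprocess.Popen(["python", "run_service.py", SOFTWARE])
time.sleep(30)  # crude wait for the model server to come up; adjust as needed

try:
    # Collect performance measurements for different configurations.
    subprocess.run(["python", "run_params.py", SOFTWARE], check=True)
finally:
    service.terminate()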

Unicorn usage on a different dataset

To run Unicorn on a different dataset, you only need unicorn_debugging.py and unicorn_optimization.py. In the online mode, to perform interventions using the recommended configuration, you need to develop your own utilities (similar to run_params.py). Additionally, you need to update etc/config.yml with your configuration options and their values. The necessary steps are the following (a sketch of the resulting config.yml appears after the steps):

Step 1: Update init_dir in config.yml with the directory where the initial data is stored.

Step 2: Update bug_dir in config.yml with the directory where bug data is stored.

Step 3: Update the output_dir variable in config.yml with the directory where you want to save the output data.

Step 4: Update hardware_columns in the config.yml with the hardware configuration options you want to use.

Step 5: Update kernel_columns in the config.yml with the kernel configuration options you want to use.

Step 6: Update perf_columns in the config.yml with the events you want to track using perf. If you use any other monitoring tool you need to update it accordingly.

Step 7: Update measurment_colums in the config.yml based on the performance objectives you want to use for resolving bugs.

Step 8: Update the is_intervenable variables in the config.yml with the configuration options you want to use, and set their values to True or False based on your application. True indicates that a configuration option can be intervened upon; False indicates that it cannot.

Step 9: Update the option_values variables in the config.yml based on the allowable values each option can take.
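
For reference, the snippet below writes a hypothetical etc/config.yml covering the fields from Steps 1-9. All paths, column names, and option values are placeholders invented for illustration (they are not the options shipped with Unicorn), and PyYAML is assumed to be available; substitute the options of your own software and hardware system.

# Hypothetical sketch of the etc/config.yml fields described in Steps 1-9.
# Every path, column name, and option value below is a placeholder.
import yaml  # PyYAML is assumed to be installed

config = {
    "init_dir": "data/initial/",                                   # Step 1
    "bug_dir": "data/bug/",                                        # Step 2
    "output_dir": "data/output/",                                  # Step 3
    "hardware_columns": ["core_freq", "gpu_freq", "emc_freq"],     # Step 4
    "kernel_columns": ["swappiness", "dirty_ratio"],               # Step 5
    "perf_columns": ["cache-misses", "context-switches"],          # Step 6
    "measurment_colums": ["inference_time",
                          "total_energy_consumption"],             # Step 7
    "is_intervenable": {"core_freq": True, "gpu_freq": True,       # Step 8
                        "cache-misses": False},
    "option_values": {"core_freq": [345600, 1113600, 2035200],     # Step 9
                      "swappiness": [10, 60, 100]},
}

with open("etc/config.yml", "w") as f:
    yaml.safe_dump(config, f)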

At this stage, you can run unicorn_debugging.py and unicorn_optimization.py with your own specification. Please note that you also need to update the directories in the data directory according to your software and hardware names. If you change the names of the variables in the config file or use a new config file, you need to make the corresponding changes in unicorn_debugging.py and unicorn_optimization.py.

How to cite

If you use Unicorn in your research or use the dataset in this repository, please cite the following:

@article{iqbalcadet,
  title={CADET: A Systematic Method For Debugging Misconfigurations using Counterfactual Reasoning},
  author={Iqbal, Md Shahriar and Krishna, Rahul and Javidian, Mohammad Ali and Ray, Baishakhi and Jamshidi, Pooyan}
}

Contacts

Please feel free to contact us via email if you find any issues or have any feedback. Thank you for using Unicorn.

Name: Md Shahriar Iqbal
Email: [email protected]

📘   License

Unicorn is released under the terms of the MIT License.

Comments
  • Evaluation of Source Environments

    Need to determine the transfer learning pipeline. Determine the following:
    --- How good is the source modeling?
    --- How much update is needed?
    --- Explainability (what are the changes across environments)
    --- Experiments with different source budgets

    opened by iqbal128855 0
  • Structure Learning

    Enrich the causal models with a Functional Causal Model (FCM) using CGNN and work with visualization for FCM.
    Update the causal model with the Causal Interaction model and compare with CGNN.
    Comparison of CGNN, FCI (entropic calculation), and the Causal Interaction model.
    If we use CGNN, we need to find the correct strategy:
    --- how to find the initial skeleton?

    opened by iqbal128855 0
  • Run MLPerf benchmark with Facebook DLRM.

    Run MLPerf Benchmark with Facebook DLRM on different hardware (Jetson Xavier and TX2, possibly on GPU cloud). Change software (RMC1, RMC2, and RMC3) and change workload (single stream, multi-stream and offline, varying number of queries for inference).

    opened by iqbal128855 0
  • Run Scalability experiments with Facebook DLRM systems.

    --- Performance analysis of the Facebook DLRM systems with different configurations. Show how difficult it is to debug for misconfigurations in real-world production systems and discuss challenges. Discuss the richness in performance landscape (more complex behavior).
    --- Run CAUPER, BugDoc, SMAC, DeltaDebugging, Encore, and CBI on the DLRM fault dataset and evaluate using the ground truth dataset for both single and multi-objective performance faults.
    --- Show proof of scalability of CAUPER in Facebook DLRM system with a high number of allowable values taken by different configuration options.
    --- Write about the evaluation of Facebook DLRM systems. Analyze by 3 slices of latency, energy and heat.

    opened by iqbal128855 0
  • Update the ground truth datasets for each type of performance fault.

    Update ground truth for each fault by using the configurations that provide 80% or more gain and recompute accuracy, precision, and recall with a confidence interval.

    opened by iqbal128855 0
  • Update Causal Structure Learning Algorithm.

    -- Use FCI with the entropic approach to resolving edges.
    -- Breakdown computation efforts required for causal structure discovery, computing path causal effects, computing individual treatment effect, and measuring recommended configurations.

    opened by iqbal128855 0
  • More comparisons

    | Method | Where? | When | Link |
    |---|---|---|---|
    | ∆LDA | ECML | 2007 | http://pages.cs.wisc.edu/~jerryzhu/ssl/pub/rlda.pdf |
    | SmartConf | ASPLOS | 2018 | https://people.cs.uchicago.edu/~hankhoffmann/autoconf.pdf |
    | BestConfig | SoCC | 2017 | https://arxiv.org/pdf/1710.03439.pdf |
    | LEO | SIGARCH | 2015 | https://dl.acm.org/doi/pdf/10.1145/2786763.2694373 |

    opened by onkfotocer 0
  • Real world case study with a self-driving car system composition

    Use Fig. 3 from https://www.bdti.com/InsideDSP/2017/03/14/NVIDIA to explain a real-world scenario. Use https://forums.developer.nvidia.com/t/cuda-performance-issue-on-tx2/50477 to show it works.

    opened by onkfotocer 0
  • Policies for handling edge-type mismatches

    When are the policies applied?

    • bi-directed & no-edge → we get a confidence score- whichever edge direction has the highest confidence use that direction.
    • Un-directed edge & no-edge → no edge
    • Tail has a bubble and head has arrow → keep the directed edge and remove the bubble
    • No-edge & edge → edge
    • No-edge & no-edge → no-edge

    When are the policies applied?

    Bubble/un-directed edge - selection variables
    Bi-directed edge - hidden variables

    When are the policies applied?

    1. Case 1: Greedy -- apply the above rules at every step
      • At each iteration there is a DAG (say DAG_t, DAG_t-1, ...)
      • If there are conflicts, keep counts of how many times an edge a->b, b->a, a--/--b appears, and use the one with the max count.
    2. Case 2: Apply in the end.
    Experiment 
    opened by rahlk 0
  • How to resolve bi-directed edges and cycles in the causal graph?

    • [ ] Randomly -- not an appropriate answer for the reviewer
    • [ ] Use FCI/FGS/PC (besides expert knowledge) which makes much looser assumptions about causal sufficiency to inform NOTEARS
    opened by rahlk 0
Releases: EuroSys2022
Owner: AISys Lab (Artificial Intelligence and Systems Laboratory)