Artifacts for the paper "MMO: Meta Multi-Objectivization for Software Configuration Tuning"


This repository contains the data and code for the following paper, which is currently under submission for publication:

Tao Chen and Miqing Li. MMO: Meta Multi-Objectivization for Software Configuration Tuning.

Introduction

In software configuration tuning, different optimizers have been designed to optimize a single performance objective (e.g., minimizing latency), yet there is still little success in preventing (or mitigating) the search from being trapped in local optima, a problem that is hard to crack due to the complex configuration landscape and expensive measurements. To tackle this challenge, in this paper we take a different perspective. Instead of focusing on improving the optimizer, we work at the level of the optimization model and propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model unique is that we do not optimize the auxiliary performance objective; rather, we use it to make configurations that perform similarly yet are different less comparable (i.e., Pareto-nondominated to each other), thus preventing the search from being trapped in local optima. Importantly, we show how to use the MMO model effectively without worrying about its weight, the only, yet highly sensitive, parameter that can determine its effectiveness. This is achieved by designing a new normalization method that allows an optimizer to adaptively find the right objective bounds when guiding the tuning. Experiments on 22 cases from 11 real-world software systems/environments confirm that our MMO model with the new normalization performs better than its state-of-the-art single-objective counterparts on 18 out of 22 cases, while achieving up to a 2.09x speedup. For 15 cases, the new normalization also enables the MMO model to outperform the instance that uses the normalization proposed in our prior FSE work under pre-tuned best weights, saving a great amount of resources that would otherwise be necessary to find a good weight. We also demonstrate that the MMO model with the new normalization can consolidate FLASH, a recent model-based tuning tool, on 15 out of 22 cases with a 1.22x speedup in general.
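To make the core idea concrete, the sketch below shows one way such a bi-objective transformation can be written down. This is a minimal, hypothetical illustration rather than the exact formulation in the paper: normTarget and normAux stand for the normalized target and auxiliary objective values, and w is the weight discussed above.

    public class MmoSketch {
        // Hypothetical sketch: both meta-objectives are to be minimized.
        // Two configurations with similar target performance but different
        // auxiliary performance trade off between g1 and g2, so neither
        // dominates the other; the auxiliary objective itself is never
        // optimized, it only induces incomparability.
        static double[] metaObjectives(double normTarget, double normAux, double w) {
            double g1 = normTarget + w * normAux;
            double g2 = normTarget - w * normAux;
            return new double[] {g1, g2};
        }
    }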

Data and Results

The dataset of this work can be accessed via the Zenodo link here. In particular, the zip file contains all the raw data reported in the paper; most of the structure is self-explanatory, but we wish to highlight the following (a short sketch for programmatically locating the result files follows this list):

  • The data under the folders 1.0-0.0 and 0.0-1.0 are for the single-objective optimizers: the former uses O1 as the target performance objective while the latter uses O2. The data under the other folders, named after the subject systems, are for MMO and PMO. The results under the weight folder 1.0 are for MMO, while all other weight folders contain the data for MMO-FSE under different weight values.

  • For the data of MMO, MMO-FSE, and PMO, the folders 0 and 1 denote using O1 and O2 as the target performance objective, respectively.

  • In the lowest-level folder where the data is stored (i.e., the sas folder), SolutionSet.rtf contains the results over all repeated runs; SolutionSetWithMeasurement.rtf records the results over different numbers of measurements.
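As an illustration of the layout above, the following sketch (assuming the zip file has been extracted into a local directory, here hypothetically named data) walks the hierarchy and lists all SolutionSet.rtf files:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class CollectResults {
        public static void main(String[] args) throws IOException {
            Path root = Paths.get("data"); // hypothetical extraction directory
            try (Stream<Path> paths = Files.walk(root)) {
                // e.g., data/<system>/1.0/0/sas/SolutionSet.rtf
                paths.filter(p -> p.getFileName().toString().equals("SolutionSet.rtf"))
                     .forEach(System.out::println);
            }
        }
    }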

Source Code

The code folder contains all the source code, and an executable jar file is provided in the executable folder.

Running the Experiments

To run the experiments, one can download mmo-experiments.jar from the aforementioned repository (under the executable folder). Since the artifacts were written in Java, we assume that a JDK/JRE has already been installed. Next, one can run the code using java -jar mmo-experiments.jar [subject] [runs], where [subject] and [runs] denote the subject software system and the number of repeated runs (an integer; 50 is the default if it is not specified), respectively. The keywords for the systems/environments used in the paper are:

  • trimesh
  • x264
  • storm-wc
  • storm-rs
  • dnn-sa
  • dnn-adiac
  • mariadb
  • vp9
  • mongodb
  • lrzip
  • llvm

For example, running java -jar mmo-experiments.jar trimesh would execute experiments on the trimesh software for 50 repeated runs.
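To run all subjects back to back, one could also wrap the command in a small driver such as the sketch below (it assumes java is on the PATH and mmo-experiments.jar sits in the working directory):

    import java.io.IOException;

    public class RunAll {
        public static void main(String[] args) throws IOException, InterruptedException {
            String[] subjects = {"trimesh", "x264", "storm-wc", "storm-rs", "dnn-sa",
                                 "dnn-adiac", "mariadb", "vp9", "mongodb", "lrzip", "llvm"};
            for (String subject : subjects) {
                // Equivalent to: java -jar mmo-experiments.jar <subject> 50
                Process p = new ProcessBuilder("java", "-jar", "mmo-experiments.jar", subject, "50")
                        .inheritIO()   // forward the experiment's output to this console
                        .start();
                p.waitFor();           // run subjects sequentially
            }
        }
    }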

For each software system, the experiment consists of the runs for MMO, MMO-FSE with all weight values, PMO, and the four state-of-the-art single-objective optimizers, as well as FLASH and FLASH_MMO. All outputs will be stored in the results folder in the same directory as the executable jar file.

All the measurement data of the subject configurable systems have been placed inside the mmo-experiments.jar.
