Automated Hyperparameter Optimization Competition

Overview

QQ Browser 2021 AI Algorithm Competition - Automated Hyperparameter Optimization Contest

ACM CIKM 2021 AnalyticCup

In feed-recommendation scenarios, the performance of a model or strategy commonly depends on its "hyperparameters", which are usually tuned by hand based on experience. Manual tuning is inefficient, costly to maintain, and rarely reaches the best possible result. This track therefore focuses on hyperparameter optimization: starting from a real business problem, it evaluates each team's hyperparameter optimization algorithm on anonymized datasets. The task is a hyperparameter optimization (black-box optimization) problem: given the value space of the hyperparameters, the reward of one set of hyperparameters can be obtained in each round, and the optimization algorithm must find a set of hyperparameters with a reward as large as possible within a limited number of iterations. Teams are ranked by the maximum reward they find.

1. Key resources

2. Code structure

|--example_random_searcher  example submission: random search
|  `--searcher.py
|
|--example_bayesian_optimization  example submission: Bayesian optimization
|  |--requirements.txt     example of extra package requirements
|  `--searcher.py
|
|--input                   data for the test evaluation functions
|  |--data-2
|  `--data-30
|
|--thpo                    thpo competition toolkit
|  |--__init__.py
|  |--abstract_searcher.py
|  |--common.py
|  |--evaluate_function.py
|  |--reward_calculation.py
|  |--run_search_one_time.py
|  `--run_search.py
|
|--main.py                 main program for local testing
|--local_test.sh           local test script
|--prepare_submission.sh   packaging script to run before submission
|--environments.txt        packages pre-installed in the evaluation environment
`--requirements.txt        dependencies of the demo programs

3. Quick start

3.1 Environment setup

The THPO-Kit toolkit is written in Python 3. Its dependencies are listed in requirements.txt and must be installed before the scripts can run; install them with pip3:

pip3 install -r requirements.txt

3.2 Create a searcher

  1. Following example_random_searcher, create a new directory my_algo for your algorithm
  2. Create a new searcher.py file in the my_algo directory
  3. Implement your own Searcher class in searcher.py (the file name and class name must not be changed)
  4. Implement the __init__ and suggest functions
  5. In local_test.sh, change SEARCHER to my_algo
  6. Run the local_test.sh script to get your algorithm's results

Step 1 - Step 2:[root folder]

|--my_algo
|  |--requirements.txt
|  `--searcher.py 
|--local_test.sh

Step 3 - Step 4:[searcher.py]

# MUST import AbstractSearcher from thpo.abstract_searcher
from thpo.abstract_searcher import AbstractSearcher
from random import randint

class Searcher(AbstractSearcher):
    searcher_name = "RandomSearcher"

    def __init__(self, parameters_config, n_iter, n_suggestion):
        AbstractSearcher.__init__(self, 
                                  parameters_config, 
                                  n_iter,
                                  n_suggestion)

    def suggest(self, suggestion_history, n_suggestions=1):
        # Ignore the history: pick one random grid point ("coords")
        # for every hyperparameter, n_suggestions times.
        next_suggestions = []
        for i in range(n_suggestions):
            next_suggest = {
                name: conf["coords"][randint(0, len(conf["coords"]) - 1)]
                for name, conf in self.parameters_config.items()
            }
            next_suggestions.append(next_suggest)
        return next_suggestions

Step 5:[local_test.sh]

SEARCHER="my_algo"

3.3 Local test

Run the local_test.sh script for local evaluation:

./local_test.sh

Execution output:

====================== run search result ========================
 err_code:  0  err_msg:  
========================= iters means ===========================
func: data-2 iteration best: [25.24271821 26.36435157 12.77928619 10.19180929 11.3147711  10.17430656
 12.77928619 27.79752169 26.36793589 11.12007615]
func: data-30 iteration best: [-0.95264345 -0.27725879 -0.36873091 -0.68088963 -0.28840479 -0.50006427
 -0.32088949 -0.78627201 -0.53204227 -0.98427191]
========================= fianl score ============================
example_bayesian_optimization final score:  0.47173337831255463
==================================================================

3.4 Submit your code

Package your searcher with the prepare_submission.sh script, then submit the resulting archive at the code submission entry on the competition website:

./prepare_submission.sh example_random_searcher

Execution output:

upload_example_random_searcher_08131917
  adding: requirements.txt (stored 0%)
  adding: searcher.py (deflated 66%)
----------------------------------------------------------------
Built achive for upload
Archive:  ./upload_example_random_searcher_08131917.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  08-13-2021 19:17   requirements.txt
     3767  08-13-2021 19:17   searcher.py
---------                     -------
     3767                     2 files
For scoring, upload upload_example_random_searcher_08131917.zip at address:
https://algo.browser.qq.com/


QQ Browser 2021 AI Algorithm Competition - Automated Hyperparameter Optimization Contest

ACM CIKM 2021 AnalyticCup

The choice of hyperparameters has a critical effect on models and strategies in recommendation systems, yet hyperparameters are mostly tuned by hand based on experience, which incurs high maintenance costs and sub-optimal results. This track therefore targets automated hyperparameter optimization on anonymized, realistic industrial tasks and datasets. Given the space of possible hyperparameter values, a reward can be obtained for one set of hyperparameters in each iteration. Participants are asked to design a hyperparameter optimization algorithm that maximizes the reward within a limited number of iterations; the final ranking is determined by the maximum reward each team finds.
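
Concretely, the task follows a standard black-box optimization protocol: propose a set of hyperparameters, observe its reward, and repeat under a fixed budget while tracking the best reward seen. The loop below is only a minimal illustration of that protocol; search_space and evaluate are hypothetical placeholders, since in the competition the reward comes from the provided evaluation functions and the THPO-Kit harness drives the loop for you when local_test.sh runs.

import random

# Hypothetical discrete search space and reward function, standing in for the
# hidden evaluation functions used by the competition.
search_space = {"learning_rate": [0.001, 0.01, 0.1], "batch_size": [32, 64, 128]}

def evaluate(params):
    return -abs(params["learning_rate"] - 0.01) - abs(params["batch_size"] - 64) / 64

best_params, best_reward = None, float("-inf")
for _ in range(10):                      # fixed iteration budget
    params = {name: random.choice(values) for name, values in search_space.items()}
    reward = evaluate(params)            # one reward observation per iteration
    if reward > best_reward:             # ranking uses the maximum reward found
        best_params, best_reward = params, reward
print(best_params, best_reward)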

1. Resources

2. Repo structure

|--example_random_searcher   	    # example of random search
|  `--searcher.py
|
|--example_bayesian_optimization    # example of bayesian optimization
|  |--requirements.txt              # extra package requirements
|  `--searcher.py
|
|--input                            # testcases
|  |--data-2
|  `--data-30
|
|--thpo                             # thpo-kit
|  |--__init__.py
|  |--abstract_searcher.py
|  |--common.py
|  |--evaluate_function.py
|  |--reward_calculation.py
|  |--run_search_one_time.py
|  `--run_search.py
|
|--main.py                          # main
|--local_test.sh                    # script for local test
|--prepare_submission.sh            # script for submission
|--environments.txt                 # packages installed in the remote environment
`--requirements.txt                 # demo requirements

3. Quick start

3.1 Environment setup

The THPO-Kit toolkit is written in Python 3. The dependency packages are listed in requirements.txt and need to be installed before the scripts can run. Use pip3 to install them:

pip3 install -r requirements.txt

3.2 Create a searcher

  1. Referring to example_random_searcher, create a new directory my_algo for your algorithm
  2. Create a new searcher.py file in the my_algo directory
  3. Implement your own Searcher class in the searcher.py file (the file name and class name must not be changed)
  4. Implement the __init__ and suggest functions (a history-aware sketch is shown after Step 5 below)
  5. Modify local_test.sh and change SEARCHER to my_algo
  6. Execute the local_test.sh script to get the results of the algorithm

Step 1 - Step 2:[root folder]

|--my_algo
|  |--requirements.txt
|  `--searcher.py 
|--local_test.sh

Step 3 - Step 4:[searcher.py]

# MUST import AbstractSearcher from thpo.abstract_searcher
from thpo.abstract_searcher import AbstractSearcher
from random import randint

class Searcher(AbstractSearcher):
    searcher_name = "RandomSearcher"

    def __init__(self, parameters_config, n_iter, n_suggestion):
        AbstractSearcher.__init__(self, 
                                  parameters_config, 
                                  n_iter,
                                  n_suggestion)

    def suggest(self, suggestion_history, n_suggestions=1):
        # Ignore the history: pick one random grid point ("coords")
        # for every hyperparameter, n_suggestions times.
        next_suggestions = []
        for i in range(n_suggestions):
            next_suggest = {
                name: conf["coords"][randint(0, len(conf["coords"]) - 1)]
                for name, conf in self.parameters_config.items()
            }
            next_suggestions.append(next_suggest)
        return next_suggestions

Step 5:[local_test.sh]

SEARCHER="my_algo"

3.3 Local test

Execute the script local_test.sh for local evaluation

./local_test.sh

Execution output:

====================== run search result ========================
 err_code:  0  err_msg:  
========================= iters means ===========================
func: data-2 iteration best: [25.24271821 26.36435157 12.77928619 10.19180929 11.3147711  10.17430656
 12.77928619 27.79752169 26.36793589 11.12007615]
func: data-30 iteration best: [-0.95264345 -0.27725879 -0.36873091 -0.68088963 -0.28840479 -0.50006427
 -0.32088949 -0.78627201 -0.53204227 -0.98427191]
========================= fianl score ============================
example_bayesian_optimization final score:  0.47173337831255463
==================================================================

3.4 Submission

Use the prepare_submission.sh script to create a zip file, then submit that zip file at the code submission entry on the competition website:

./prepare_submission.sh example_random_searcher

Execution output:

upload_example_random_searcher_08131917
  adding: requirements.txt (stored 0%)
  adding: searcher.py (deflated 66%)
----------------------------------------------------------------
Built achive for upload
Archive:  ./upload_example_random_searcher_08131917.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  08-13-2021 19:17   requirements.txt
     3767  08-13-2021 19:17   searcher.py
---------                     -------
     3767                     2 files
For scoring, upload upload_example_random_searcher_08131917.zip at address:
https://algo.browser.qq.com/