Generalized and Efficient Blackbox Optimization System.

Overview



OpenBox Doc | OpenBox Doc (Chinese)


OpenBox is an efficient and generalized blackbox optimization (BBO) system that supports: 1) BBO with multiple objectives and constraints, 2) BBO with transfer learning, 3) BBO with distributed parallelization, 4) BBO with multi-fidelity acceleration, and 5) BBO with early stopping. OpenBox is designed and developed by the AutoML team from the DAIR Lab at Peking University. Its goal is to make blackbox optimization easier to apply in both industry and academia, and to help facilitate data science.

Software Artifacts

Standalone Python package.

Users can install the released package from PyPI and use it in Python.

Distributed BBO service.

We adopt the "BBO as a service" paradigm and implement OpenBox as a managed, general-purpose service for black-box optimization. Users can conveniently access this service via a REST API and do not need to worry about issues such as environment setup, software maintenance, programming, and execution optimization. We also provide a web UI through which users can easily track and manage their tasks.
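For illustration only, a call to such a service might look like the sketch below. The host, port, route, and JSON fields are hypothetical placeholders rather than the documented OpenBox service API; consult the service docs for the real endpoints.

import requests

# Hypothetical sketch: register an optimization task with a deployed
# OpenBox service. The route and payload fields below are assumptions.
resp = requests.post(
    'http://127.0.0.1:11425/bo_advice/task_register/',
    json={'task_id': 'quick_start', 'max_runs': 50},
)
print(resp.json())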

Design Goal

The design of OpenBox follows these principles:

  • Ease of use: Minimal user effort, and user-friendly visualization for tracking and managing BBO tasks.
  • Consistent performance: Host state-of-the-art optimization algorithms; Choose the proper algorithm automatically.
  • Resource-aware management: Give cost-model-based advice to users, e.g., minimal workers or time-budget.
  • Scalability: Scale gracefully with the number of input variables, objectives, tasks, trials, and parallel evaluations.
  • High efficiency: Effective use of parallel resources, system optimization with transfer-learning and multi-fidelities, etc.
  • Fault tolerance, extensibility, and data privacy protection.


OpenBox Capabilities at a Glance

Built-in Optimization Components

  • Surrogate Model
    • Gaussian Process
    • TPE
    • Probabilistic Random Forest
    • LightGBM
  • Acquisition Function
    • EI
    • PI
    • UCB
    • MES
    • EHVI
    • TS
  • Acquisition Optimizer
    • Random Search
    • Local Search
    • Interleaved RS and LS
    • Differential Evolution
    • L-BFGS-B

Optimization Algorithms

  • Random Search
  • SMAC
  • GP based Optimizer
  • TPE
  • Hyperband
  • BOHB
  • MFES-HB
  • Anneal
  • PBT
  • Regularized EA
  • NSGA-II
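These built-in components can be combined via keyword arguments of the Optimizer, as later examples in this README do with surrogate_type, acq_type, and acq_optimizer_type. Below is a minimal sketch; the option strings 'gp', 'ei', and 'local_random' are assumptions based on the component names above, so check the documentation for the exact values.

from openbox import Optimizer

# Sketch of explicitly choosing built-in components; objective_function and
# space would be defined as in the Quick Start below.
opt = Optimizer(
    objective_function,
    space,
    max_runs=50,
    surrogate_type='gp',                # Gaussian Process surrogate
    acq_type='ei',                      # Expected Improvement acquisition
    acq_optimizer_type='local_random',  # interleaved local and random search (assumed name)
    task_id='components_demo',
)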

Installation

System Requirements

Installation Requirements:

  • Python >= 3.6 (Python 3.7 is recommended!)

Supported Systems:

  • Linux (Ubuntu, ...)
  • macOS
  • Windows

We strongly suggest creating a Python environment via Anaconda:

conda create -n openbox3.7 python=3.7
conda activate openbox3.7

Then update your pip and setuptools as follows:

pip install pip setuptools --upgrade

Installation from PyPI

To install OpenBox from PyPI:

pip install openbox

Manual Installation from Source

To install the newest OpenBox package from source, run the following commands:

(Python >= 3.7 only. For Python == 3.6, please see our Installation Guide Document)

git clone https://github.com/thomas-young-2013/open-box.git && cd open-box
cat requirements/main.txt | xargs -n 1 -L 1 pip install
python setup.py install --user --prefix=

For more details about installation instructions, please refer to the Installation Guide Document.

Quick Start

A quick start example is given by:

import numpy as np
from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", -5, 10, default_value=0)
x2 = sp.Real("x2", 0, 15, default_value=0)
space.add_variables([x1, x2])

# Define Objective Function
def branin(config):
    x1, x2 = config['x1'], config['x2']
    y = (x2-5.1/(4*np.pi**2)*x1**2+5/np.pi*x1-6)**2+10*(1-1/(8*np.pi))*np.cos(x1)+10
    return y

# Run
if __name__ == '__main__':
    opt = Optimizer(branin, space, max_runs=50, task_id='quick_start')
    history = opt.run()
    print(history)
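After the run, the returned history exposes inspection helpers that appear elsewhere in this README (plot_convergence, visualize_jupyter, get_importance); for example:

history = opt.get_history()
history.plot_convergence()          # convergence curve of the best objective
history.visualize_jupyter()         # interactive visualization in Jupyter
print(history.get_importance())     # hyperparameter importance ranking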

An example with multiple objectives and constraints is given below:

from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", 0.1, 10.0)
x2 = sp.Real("x2", 0.0, 5.0)
space.add_variables([x1, x2])

# Define Objective Function
def CONSTR(config):
    x1, x2 = config['x1'], config['x2']
    y1, y2 = x1, (1.0 + x2) / x1
    c1, c2 = 6.0 - 9.0 * x1 - x2, 1.0 - 9.0 * x1 + x2
    return dict(objs=[y1, y2], constraints=[c1, c2])

# Run
if __name__ == "__main__":
    opt = Optimizer(CONSTR, space, num_objs=2, num_constraints=2,
                    max_runs=50, ref_point=[10.0, 10.0], task_id='moc')
    opt.run()
    print(opt.get_history().get_pareto())
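OpenBox also supports distributed parallelization. Below is a minimal local-parallel sketch; it assumes a ParallelOptimizer API with parallel_strategy and batch_size keyword arguments, so check the parallel-evaluation docs for the exact signature.

from openbox import ParallelOptimizer

# Sketch: evaluate the Quick Start objective with several parallel workers.
# 'async' and batch_size=4 are illustrative assumptions.
if __name__ == '__main__':
    opt = ParallelOptimizer(branin, space,
                            parallel_strategy='async',  # or 'sync' (assumed option names)
                            batch_size=4,               # number of parallel workers
                            max_runs=50, task_id='parallel_demo')
    history = opt.run()
    print(history)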


Releases and Contributing

OpenBox has a frequent release cycle. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute bug fixes, please do so without further discussion.

If you plan to contribute new features, new modules, etc., please first open an issue or reuse an existing one, and discuss the feature with us.

To learn more about making a contribution to OpenBox, please refer to our How-to contribution page.

We appreciate all contributions and thank all the contributors!


Related Projects

Aiming at openness and advancing AutoML ecosystems, we have also released a few other open-source projects:

  • MindWare: an open source system that provides end-to-end ML model training and inference capabilities.

Related Publications

OpenBox: A Generalized Black-box Optimization Service. Yang Li, Yu Shen, Wentao Zhang, Yuanwei Chen, Huaijun Jiang, Mingchao Liu, Jiawei Jiang, Jinyang Gao, Wentao Wu, Zhi Yang, Ce Zhang, Bin Cui. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021). https://arxiv.org/abs/2106.00421

MFES-HB: Efficient Hyperband with Multi-Fidelity Quality Measurements. Yang Li, Yu Shen, Jiawei Jiang, Jinyang Gao, Ce Zhang, Bin Cui. The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021). https://arxiv.org/abs/2012.03011

License

The entire codebase is under MIT license.

Comments
  • Parameter values restricted to a fixed set

    I want the parameter C1 to take one value from the set {3.6, 4.0, 4.4, 4.8, 5.2, 5.6, 6.0, 7.0, 8.0}. I tried writing C1 = sp.Categorical("C1", [3.6, 4.0, 4.4, 4.8, 5.2, 5.6, 6.0, 7.0, 8.0]), but it seems to cause a problem. How should this be written?

    opened by nebula303 10
  • Documentation: example of transfer learning

    I can see there's a page in the docs about the transfer learning feature, however, I wasn't able to find a clear example of how to do this with OpenBox.

    documentation enhancement 
    opened by bbudescu 7
  • Constraints seem not to work.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor
    
    class NeuralNetwork(nn.Module):
        def __init__(self, feature_dims1=512, feature_dims2=256, feature_dims3=128):
            super(NeuralNetwork, self).__init__()
            self.flatten = nn.Flatten()
            self.linear_relu_stack = nn.Sequential(
                nn.Linear(28*28, feature_dims1),
                nn.ReLU(),
                nn.Linear(feature_dims1, feature_dims2),
                nn.ReLU(),
                nn.Linear(feature_dims2, feature_dims3),
                nn.ReLU(),
                nn.Linear(feature_dims3, 10),
            )
    
        def forward(self, x):
            x = self.flatten(x)
            logits = self.linear_relu_stack(x)
            return logits
    
    def train_loop(dataloader, model, loss_fn, optimizer, device):
        size = len(dataloader.dataset)
        for batch, (X, y) in enumerate(dataloader):
            X = X.to(device)
            y = y.to(device)
    
            # Compute prediction and loss
            pred = model(X)
            loss = loss_fn(pred, y)
    
            # Backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    
            if batch % 100 == 0:
                loss, current = loss.item(), batch * len(X)
                print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
    
    def test_loop(dataloader, model, loss_fn, device):
        size = len(dataloader.dataset)
        num_batches = len(dataloader)
        test_loss, correct = 0, 0
    
        with torch.no_grad():
            for X, y in dataloader:
                X = X.to(device)
                y = y.to(device)
    
                pred = model(X)
                test_loss += loss_fn(pred, y).item()
                correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    
        test_loss /= num_batches
        correct /= size
        print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
        
        return correct
    
    training_data = datasets.FashionMNIST(
        root="data",
        train=True,
        download=True,
        transform=ToTensor()
    )
    
    test_data = datasets.FashionMNIST(
        root="data",
        train=False,
        download=True,
        transform=ToTensor()
    )
    
    from openbox import sp
    
    def get_configspace():
        space = sp.Space()
        learning_rate = sp.Real("learning_rate", 1e-3, 0.3, default_value=0.1, log=True)
        batch_size = sp.Int("batch_size", 32, 64)
        feature_dims1 = sp.Int("feature_dims1", 256, 512)
        feature_dims2 = sp.Int("feature_dims2", 256, 512)
        feature_dims3 = sp.Int("feature_dims3", 256, 512)
        space.add_variables([learning_rate, batch_size, feature_dims1, feature_dims2, feature_dims3])    
        return space
    
    def objective_function(config: sp.Configuration):
        params = config.get_dictionary()
        params['epochs'] = 10
        
        train_dataloader = DataLoader(training_data, batch_size=params['batch_size'])
        test_dataloader = DataLoader(test_data, batch_size=params['batch_size'])
        
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")    
        model = NeuralNetwork(
            feature_dims1=params['feature_dims1'], 
            feature_dims2=params['feature_dims2'],
            feature_dims3=params['feature_dims3']
        )
        model = model.to(device)
    
        loss_fn = nn.CrossEntropyLoss()
    
        optimizer = torch.optim.SGD(model.parameters(), lr=params['learning_rate'])
        
        for epoch in range(params['epochs']):
            print(f"Epoch {epoch+1}\n-------------------------------")
            train_loop(train_dataloader, model, loss_fn, optimizer, device)
            correct = test_loop(test_dataloader, model, loss_fn, device)
        
        result = dict()
        result['objs'] = [1-correct, ]
        result['constraints'] = [
            params['feature_dims2'] - params['feature_dims1'], 
            params['feature_dims3'] - params['feature_dims2'], 
        ]
        
        return result
    
    from openbox import Optimizer
    
    # Run
    opt = Optimizer(
        objective_function,
        get_configspace(),
        num_objs=1,
        num_constraints=2,
        max_runs=10,
        surrogate_type='prf',
        time_limit_per_trial=180,
        task_id='hpo',
    )
    history = opt.run()
    
    history = opt.get_history()
    print(history)
    
    history.plot_convergence()
    
    history.visualize_jupyter()
    
    print(history.get_importance())
    

    I added 2 constraints to the result of objective_function, hoping to enforce "feature_dims1 > feature_dims2 > feature_dims3", but it does not seem to work as expected.

    enhancement good first issue 
    opened by Aiuan 4
  • How to plot the gaussian process regression approximation?

    Hi there,

    I was trying to plot the Gaussian process model using the function advisor.surrogate_model.predict(), but it gives predictions far away from the sample points and the objective function.


    The advisor itself seems to be working fine, with output close to the minimum, but the fitted surface doesn't look right. What's the proper way of plotting the Gaussian process fit?

    Cheers

    question 
    opened by Dalton2333 4
  • How do I pass kwargs to objective function?

    Hi there,

    I was trying to pass all the parameters to the objective function, including both the varying parameters to be optimized and some fixed parameters. I tried to use Latin hypercube for initial sampling, and it warned that only int and float variables can be used with Latin hypercube (constant and categorical are not accepted). So I tried to split the parameters into two parts and pass them into the objective function separately. Then I found that in optimizer.parallel_smbo.wrapper() the kwargs are set to dict(), so I can't pass my fixed parameters into the objective function.

    Was I missing something? How do I pass additional kwargs to the objective function? I would appreciate a relevant example.

    Cheers!

    enhancement 
    opened by Dalton2333 4
  • Error when ranking the importance of categorical parameters

    Describe the bug When using OpenBox's parameter-importance ranking, I included a categorical parameter; tuning completes normally, but the program then fails to output the importance ranking and raises an error. Specifically, I wrote a demo that tests the efficiency of compression algorithms. The parameters are COMPRESS_LEVEL (integer, 1-9) and COMPRESS_METHOD (categorical, one of ["bz2", "zlib", "gzip"]). The demo measures the compression efficiency on a file under different combinations of COMPRESS_LEVEL and COMPRESS_METHOD. The two objectives are compression time and compression ratio (compressed file size divided by original file size). The optimizer is configured as follows:
    opt = Optimizer( compress, space, num_objs=2, num_constraints=0, max_runs=50, surrogate_type='auto', acq_type='ehvi', time_limit_per_trial=30, task_id='compress', ref_point=[6.97, 2.74],) Tuning works fine, but calling history.get_importance() fails with a type error!

    To Reproduce The error is: numpy.core._exceptions._UFuncNoLoopError: ufunc 'maximum' did not contain a loop with signature matching types (dtype('<U11'), dtype('<U11')) -> None

    Expected behavior The feature-importance ranking is printed normally.

    Outputs and Logs The full tuning log is attached: OpenBox-compress-logs.log

    Additional context After converting the string values of the categorical parameter to numbers, the feature works again. Does OpenBox not support importance ranking for categorical parameters with string values?

    bug 
    opened by znzjugod 4
  • self.config_advisor.get_suggestion() is running slower and slower, even taking tens of seconds per call

    Hello, I started multiple OpenBox instances in separate processes; the total number of processes is less than the number of CPU cores, and each instance is a single-process Optimizer(...) with max_runs set to 500. The first few dozen iterations are fairly fast, but suggestion generation gets slower and slower; by rounds 100-300, obtaining a configuration takes tens of seconds each time. What is the reason, and how can I solve it? Thank you!

    opened by jifeiran 4
  • How many hyperparameters can OpenBox handle?

    Hi, if there are hundreds of hyperparameters, can I use OpenBox to get a good result? And is there an empirical relationship between the number of runs and the number of hyperparameters?

    opened by yuzizbk 3
  • Why can't I install pyrfr? I have been trying for two days

    Collecting pyrfr==0.7.0 Downloading pyrfr-0.7.0.tar.gz (290 kB) -------------------------------------- 290.7/290.7 kB 1.8 MB/s eta 0:00:00 Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Building wheels for collected packages: pyrfr Building wheel for pyrfr (setup.py): started Building wheel for pyrfr (setup.py): finished with status 'error' Running setup.py clean for pyrfr Failed to build pyrfr Installing collected packages: pyrfr Running setup.py install for pyrfr: started Running setup.py install for pyrfr: finished with status 'error'

    error: subprocess-exited-with-error

    python setup.py bdist_wheel did not run successfully. exit code: 1

    [6 lines of output] warning: build_py: byte-compiling is disabled, skipping.

    cl: command-line warning D9002: ignoring unknown option "-std=c++11" regression_wrap.cpp C:\Users\16406\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: cannot open include file: "io.h": No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2 [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pyrfr error: subprocess-exited-with-error

    Running setup.py install for pyrfr did not run successfully. exit code: 1

    [6 lines of output] C:\Users\16406\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( cl: command-line warning D9002: ignoring unknown option "-std=c++11" regression_wrap.cpp C:\Users\16406\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: cannot open include file: "io.h": No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2 [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure

    Encountered error while trying to install package.

    pyrfr

    note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.

    opened by BaihtSYSU 2
  • Is there any way to set an equality constraint?


    import math
    import numpy as np
    from math import prod, fabs
    from openbox import sp
    
    
    def mishra(config: sp.Configuration):
        config_dict = config.get_dictionary()
        ws = np.array([config_dict['w%d' % i] for i in range(3)])
        # if fabs(sum(ws)-1)>1e-6:
        #     return {
        #         "objs":[math.inf,],
        #     }
    
        # if sum(ws) > 1:
        #     return {
        #         "objs": [math.inf, ]
        #     }
    
        result = dict()
        result['objs'] = [-1 * (ws[0] * 2 + ws[1] * 3 + ws[2]), ]
        # ?
        result["constraints"] = [fabs(sum(ws) - 1) - 1e-6, ]
        return result
    
    
    N = 3
    df = 1 / N
    params = {
        'float': {
            f"w{i}": (0, 1, df) for i in range(N)
        }
    }
    space = sp.Space()
    space.add_variables([
        sp.Real(name, *para) for name, para in params['float'].items()
    ])
    
    from openbox import Optimizer
    
    opt = Optimizer(
        mishra,
        space,
        num_objs=1,
        num_constraints=1,
        surrogate_type='gp',
        acq_optimizer_type='random_scipy',
        max_runs=50,
        time_limit_per_trial=10,
        task_id='soc',
    )
    history = opt.run()
    
    print(history)
    
    

    Here is my code. I want the sum of these three parameters to be 1; I tried adding constraints of both 1-sum(ws) and sum(ws)-1, but it doesn't work.

    opened by Zeng1998 2
  • Are there any side effects when using ParallelOptimizer?

    At the beginning, I launched only 12 parallel workers. To get results faster, I now launch 30 parallel workers, but I find that the result gets worse for the same number of search iterations. Is this expected?

    By the way, the performance of different workers varies: some run much faster, maybe three times faster than the slowest workers.

    question 
    opened by yuzizbk 2
  • Docs & Examples for Multi-Fidelity / Early Stopping

    E.g., is the resource dimension (e.g., the number of epochs or the number of samples to train) treated as just another hyperparameter? How can one do cross-task transfer learning with multi-fidelity, e.g., how does one report xval accuracy at every epoch for previous trials?

    opened by bbudescu 3
  • Support for conditional (nested, hierarchical) parameter spaces

    Does OpenBox support sampling some parameters only when a certain value has been sampled by some other parameter?

    E.g., for a neural net, don't sample layer3_n_filters, layer3_filter_w, layer3_filter_h if we only have 2 layers in the net (e.g., n_layers == 2).

    Or is there another way to address this, e.g., can one just signal that a particular parameter combination is invalid and quit early? Does OpenBox train a separate classification model that tries to predict feasibility for each combination? Should one use inequality constraints? What is the impact on the efficiency of exploring the search space? E.g., does it assign a high cost to infeasible combinations, and, if so, does this mean it will also assign a high prior cost to combinations on the edge of feasibility?

    In short, how can one treat conditional search spaces?

    documentation 
    opened by bbudescu 4
  • How to disable the function to print information?

    The project is wonderful! It helps me a lot! But I don't know how to disable the printed log output.

    I don't want that output printed, since it makes my results hard to read. Is there any way to turn it off?

    enhancement 
    opened by shengzeang 2
Releases
  • v0.8.0 (Dec 18, 2022)

    Highlights

    • Add HTML visualization for the optimization process (#48).
      • Provide basic charts for objectives and constraints.
      • Provide advanced functions, including surrogate fitting analysis and hyperparameter importance analysis.
    • Update transfer learning (#54).
      • API change: for transfer learning data, users should provide a List[History] as transfer_learning_history, instead of an OrderedDict[config, perf] as history_bo_data (#54, 4641d7cf).
      • Examples and docs are updated.
    • Refactor History object (0bce5800).
      • Rename HistoryContainer to History.
      • Simplify data structure and provide convenient APIs.
      • Rewrite all methods, including data obtaining, plotting, saving/loading, etc.
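    Under the updated API, passing transfer-learning data might look like the following minimal sketch (the two earlier optimizers are hypothetical runs of related tasks):

    # Sketch: transfer_learning_history takes a List[History] from previous tasks.
    prev_histories = [opt1.get_history(), opt2.get_history()]  # hypothetical earlier runs
    opt = Optimizer(objective_function, space, max_runs=50, task_id='new_task',
                    transfer_learning_history=prev_histories)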

    Backwards Incompatible Changes

    • API change: objs is renamed to objectives, and num_objs is renamed to num_objectives (ecd5928a).
    • Change objective value of failed trials from MAXINT to np.inf (da88bd24).
    • Drop support for Python 3.6 (end of life on Dec 23, 2021).
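    For instance, the result dict returned by an objective function changes as follows:

    result = dict(objs=[y1, y2], constraints=[c1, c2])        # before v0.8.0
    result = dict(objectives=[y1, y2], constraints=[c1, c2])  # since v0.8.0
    # Likewise, Optimizer(..., num_objs=2) becomes Optimizer(..., num_objectives=2).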

    Other Changes

    • Add BlendSearch, LineBO and SafeOpt (experimental) (#40).
    • Add color logger. Provide fine-grained control of logging options (e.g., log level).
    • Rewrite python packaging of the project (#55).
    • Update Markdown parser in docs to myst-parser. recommonmark is deprecated.
    • Add pytest for examples.
    • Use GitHub Actions for CI/CD.

    Bug Fixes

    • Fix error return type of generic advisor and update sampler (Thanks @yezoli) (#44).
    • Consider constraints in plot_convergence (#47).
  • v0.7.18 (Nov 14, 2022)

    • Add ConditionedSpace to support complex conditions between hyperparameters (https://github.com/PKU-DAIR/open-box/issues/37).
    • Numerous bug fixes.
Owner

DAIR Lab: Data and Intelligence Research (DAIR) Lab @ Peking University