Generalized and Efficient Blackbox Optimization System.

Overview



OpenBox Doc | OpenBox Chinese Doc


OpenBox is an efficient and generalized blackbox optimization (BBO) system that supports the following features: 1) BBO with multiple objectives and constraints, 2) BBO with transfer learning, 3) BBO with distributed parallelization, 4) BBO with multi-fidelity acceleration, and 5) BBO with early stopping. OpenBox is designed and developed by the AutoML team from the DAIR Lab at Peking University. Its goal is to make blackbox optimization easier to apply in both industry and academia, and to help facilitate data science.

Software Artifacts

Standalone Python package.

Users can install the released package and use it from Python.

Distributed BBO service.

We adopt the "BBO as a service" paradigm and implement OpenBox as a managed, general-purpose service for black-box optimization. Users can conveniently access the service via a REST API and do not need to worry about issues such as environment setup, software maintenance, programming, and execution optimization. We also provide a Web UI through which users can easily track and manage their tasks.

Design Goal

OpenBox follows these design principles:

  • Ease of use: Minimal user effort, and user-friendly visualization for tracking and managing BBO tasks.
  • Consistent performance: Host state-of-the-art optimization algorithms; Choose the proper algorithm automatically.
  • Resource-aware management: Give cost-model-based advice to users, e.g., the minimal number of workers or the time budget required.
  • Scalability: Scale to large numbers of input variables, objectives, tasks, trials, and parallel evaluations.
  • High efficiency: Effective use of parallel resources, system optimization with transfer-learning and multi-fidelities, etc.
  • Fault tolerance, extensibility, and data privacy protection.

Links

OpenBox Capabilities at a Glance

Built-in Optimization Components

  • Surrogate Model
    • Gaussian Process
    • TPE
    • Probabilistic Random Forest
    • LightGBM
  • Acquisition Function
    • EI
    • PI
    • UCB
    • MES
    • EHVI
    • TS
  • Acquisition Optimizer
    • Random Search
    • Local Search
    • Interleaved RS and LS
    • Differential Evolution
    • L-BFGS-B

Optimization Algorithms

  • Random Search
  • SMAC
  • GP-based Optimizer
  • TPE
  • Hyperband
  • BOHB
  • MFES-HB
  • Anneal
  • PBT
  • Regularized EA
  • NSGA-II
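
Many of these components can be selected directly when constructing an Optimizer. The snippet below is a minimal sketch that only uses argument names appearing elsewhere in this document (surrogate_type, acq_type, acq_optimizer_type); the toy objective is illustrative, and the available option values may differ across OpenBox versions.

from openbox import Optimizer, sp

# A toy 1-D search space and objective (illustrative only).
space = sp.Space()
space.add_variables([sp.Real("x", -5.0, 5.0)])

def sphere(config):
    return config['x'] ** 2

# Explicitly choose the surrogate model, acquisition function, and acquisition optimizer.
opt = Optimizer(sphere, space,
                max_runs=20,
                surrogate_type='gp',                # Gaussian Process surrogate
                acq_type='ei',                      # Expected Improvement acquisition
                acq_optimizer_type='random_scipy',  # random search plus scipy-based local optimization
                task_id='component_demo')
history = opt.run()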

Installation

System Requirements

Installation Requirements:

  • Python >= 3.6 (Python 3.7 is recommended!)

Supported Systems:

  • Linux (Ubuntu, ...)
  • macOS
  • Windows

We strongly suggest that you create a Python environment via Anaconda:

conda create -n openbox3.7 python=3.7
conda activate openbox3.7

Then update your pip and setuptools as follows:

pip install pip setuptools --upgrade

Installation from PyPI

To install OpenBox from PyPI:

pip install openbox

Manual Installation from Source

To install the newest OpenBox package from source, run the following commands:

(Python >= 3.7 only. For Python == 3.6, please see our Installation Guide Document)

git clone https://github.com/thomas-young-2013/open-box.git && cd open-box
cat requirements/main.txt | xargs -n 1 -L 1 pip install
python setup.py install --user --prefix=

For more detailed installation instructions, please refer to the Installation Guide Document.

Quick Start

A quick start example is given by:

import numpy as np
from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", -5, 10, default_value=0)
x2 = sp.Real("x2", 0, 15, default_value=0)
space.add_variables([x1, x2])

# Define Objective Function
def branin(config):
    x1, x2 = config['x1'], config['x2']
    y = (x2-5.1/(4*np.pi**2)*x1**2+5/np.pi*x1-6)**2+10*(1-1/(8*np.pi))*np.cos(x1)+10
    return y

# Run
if __name__ == '__main__':
    opt = Optimizer(branin, space, max_runs=50, task_id='quick_start')
    history = opt.run()
    print(history)
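
Besides sp.Real, the search space also supports other variable types such as integers and categoricals. The following is a minimal sketch based on the sp.Int and sp.Categorical usage that appears in the issue reports later in this document:

from openbox import sp

space = sp.Space()
learning_rate = sp.Real("learning_rate", 1e-3, 0.3, default_value=0.1, log=True)  # float, log-scaled
batch_size = sp.Int("batch_size", 32, 64)                                          # integer in [32, 64]
method = sp.Categorical("method", ["bz2", "zlib", "gzip"])                          # one value from a set
space.add_variables([learning_rate, batch_size, method])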

An example with multiple objectives and constraints is as follows:

from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", 0.1, 10.0)
x2 = sp.Real("x2", 0.0, 5.0)
space.add_variables([x1, x2])

# Define Objective Function
def CONSTR(config):
    x1, x2 = config['x1'], config['x2']
    y1, y2 = x1, (1.0 + x2) / x1
    c1, c2 = 6.0 - 9.0 * x1 - x2, 1.0 - 9.0 * x1 + x2
    return dict(objs=[y1, y2], constraints=[c1, c2])

# Run
if __name__ == "__main__":
    opt = Optimizer(CONSTR, space, num_objs=2, num_constraints=2,
                    max_runs=50, ref_point=[10.0, 10.0], task_id='moc')
    opt.run()
    print(opt.get_history().get_pareto())
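
Whichever optimizer you run, the returned history object provides helper methods for inspecting a finished run. The calls below mirror those used in the issue reports later in this document and are a sketch; plot_convergence is most meaningful for single-objective runs, and availability may vary by version:

history = opt.get_history()
print(history)

history.plot_convergence()       # convergence curve (single-objective runs)
history.visualize_jupyter()      # interactive visualization in a Jupyter notebook
print(history.get_importance())  # hyperparameter importance analysis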

More Examples:

Enterprise Users

Releases and Contributing

OpenBox has a frequent release cycle. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute bug fixes, please do so without further discussion.

If you plan to contribute new features, new modules, etc., please first open an issue or reuse an existing issue, and discuss the feature with us.

To learn more about making a contribution to OpenBox, please refer to our How-to contribution page.

We appreciate all contributions and thank all the contributors!

Feedback

Related Projects

Aiming for openness and advancing the AutoML ecosystem, we have also released a few other open source projects.

  • MindWare: an open source system that provides end-to-end ML model training and inference capabilities.

Related Publications

OpenBox: A Generalized Black-box Optimization Service Yang Li, Yu Shen, Wentao Zhang, Yuanwei Chen, Huaijun Jiang, Mingchao Liu, Jiawei Jiang, Jinyang Gao, Wentao Wu, Zhi Yang, Ce Zhang, Bin Cui; ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021). https://arxiv.org/abs/2106.00421

MFES-HB: Efficient Hyperband with Multi-Fidelity Quality Measurements Yang Li, Yu Shen, Jiawei Jiang, Jinyang Gao, Ce Zhang, Bin Cui; The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021). https://arxiv.org/abs/2012.03011

License

The entire codebase is released under the MIT license.

Comments
  • Parameter values drawn from a fixed set

    I want the parameter C1 to take one value from the set {3.6, 4.0, 4.4, 4.8, 5.2, 5.6, 6.0, 7.0, 8.0}. I tried writing C1 = sp.Categorical("C1", [3.6, 4.0, 4.4, 4.8, 5.2, 5.6, 6.0, 7.0, 8.0]), but it seems to cause a problem. How should this be written?

    opened by nebula303 10
  • Documentation: example of transfer learning

    I can see there's a page in the docs about the transfer learning feature, however, I wasn't able to find a clear example of how to do this with OpenBox.

    documentation enhancement 
    opened by bbudescu 7
  • Constraints seem not to work.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor
    
    class NeuralNetwork(nn.Module):
        def __init__(self, feature_dims1=512, feature_dims2=256, feature_dims3=128):
            super(NeuralNetwork, self).__init__()
            self.flatten = nn.Flatten()
            self.linear_relu_stack = nn.Sequential(
                nn.Linear(28*28, feature_dims1),
                nn.ReLU(),
                nn.Linear(feature_dims1, feature_dims2),
                nn.ReLU(),
                nn.Linear(feature_dims2, feature_dims3),
                nn.ReLU(),
                nn.Linear(feature_dims3, 10),
            )
    
        def forward(self, x):
            x = self.flatten(x)
            logits = self.linear_relu_stack(x)
            return logits
    
    def train_loop(dataloader, model, loss_fn, optimizer, device):
        size = len(dataloader.dataset)
        for batch, (X, y) in enumerate(dataloader):
            X = X.to(device)
            y = y.to(device)
    
            # Compute prediction and loss
            pred = model(X)
            loss = loss_fn(pred, y)
    
            # Backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    
            if batch % 100 == 0:
                loss, current = loss.item(), batch * len(X)
                print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
    
    def test_loop(dataloader, model, loss_fn, device):
        size = len(dataloader.dataset)
        num_batches = len(dataloader)
        test_loss, correct = 0, 0
    
        with torch.no_grad():
            for X, y in dataloader:
                X = X.to(device)
                y = y.to(device)
    
                pred = model(X)
                test_loss += loss_fn(pred, y).item()
                correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    
        test_loss /= num_batches
        correct /= size
        print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
        
        return correct
    
    training_data = datasets.FashionMNIST(
        root="data",
        train=True,
        download=True,
        transform=ToTensor()
    )
    
    test_data = datasets.FashionMNIST(
        root="data",
        train=False,
        download=True,
        transform=ToTensor()
    )
    
    from openbox import sp
    
    def get_configspace():
        space = sp.Space()
        learning_rate = sp.Real("learning_rate", 1e-3, 0.3, default_value=0.1, log=True)
        batch_size = sp.Int("batch_size", 32, 64)
        feature_dims1 = sp.Int("feature_dims1", 256, 512)
        feature_dims2 = sp.Int("feature_dims2", 256, 512)
        feature_dims3 = sp.Int("feature_dims3", 256, 512)
        space.add_variables([learning_rate, batch_size, feature_dims1, feature_dims2, feature_dims3])    
        return space
    
    def objective_function(config: sp.Configuration):
        params = config.get_dictionary()
        params['epochs'] = 10
        
        train_dataloader = DataLoader(training_data, batch_size=params['batch_size'])
        test_dataloader = DataLoader(test_data, batch_size=params['batch_size'])
        
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")    
        model = NeuralNetwork(
            feature_dims1=params['feature_dims1'], 
            feature_dims2=params['feature_dims2'],
            feature_dims3=params['feature_dims3']
        )
        model = model.to(device)
    
        loss_fn = nn.CrossEntropyLoss()
    
        optimizer = torch.optim.SGD(model.parameters(), lr=params['learning_rate'])
        
        for epoch in range(params['epochs']):
            print(f"Epoch {epoch+1}\n-------------------------------")
            train_loop(train_dataloader, model, loss_fn, optimizer, device)
            correct = test_loop(test_dataloader, model, loss_fn, device)
        
        result = dict()
        result['objs'] = [1-correct, ]
        result['constraints'] = [
            params['feature_dims2'] - params['feature_dims1'], 
            params['feature_dims3'] - params['feature_dims2'], 
        ]
        
        return result
    
    from openbox import Optimizer
    
    # Run
    opt = Optimizer(
        objective_function,
        get_configspace(),
        num_objs=1,
        num_constraints=2,
        max_runs=10,
        surrogate_type='prf',
        time_limit_per_trial=180,
        task_id='hpo',
    )
    history = opt.run()
    
    history = opt.get_history()
    print(history)
    
    history.plot_convergence()
    
    history.visualize_jupyter()
    
    print(history.get_importance())
    

    I added two constraints in the result returned by objective_function, hoping to enforce "feature_dims1 > feature_dims2 > feature_dims3", but it does not seem to work as expected.

    enhancement good first issue 
    opened by Aiuan 4
  • How to plot the gaussian process regression approximation?

    Hi there,

    I was trying to plot the Gaussian process model using advisor.surrogate_model.predict(), but it gives predictions far away from the sample points and the objective function.

    The advisor itself seems to work fine, with output close to the minimum, but the fitted surface doesn't look right. What's the proper way of plotting the Gaussian process fit?

    Cheers

    question 
    opened by Dalton2333 4
  • How do I pass kwargs to objective function?

    Hi there,

    I was trying to pass all parameters to the objective function, including both the varying parameters to be optimized and some fixed parameters. I tried to use Latin hypercube for initial sampling, and it warned me that only int and float variables can be used with Latin hypercube sampling (constant and categorical are not accepted). So I tried to split the parameters into two parts and pass them to the objective function separately. Then I found that in optimizer.parallel_smbo.wrapper() the kwargs are set to dict(), so I can't pass my fixed parameters to the objective function.

    Was I missing something? How do I pass additional kwargs to the objective function? I would appreciate a relevant example.

    Cheers!

    enhancement 
    opened by Dalton2333 4
  • OpenBox raises an error when ranking the importance of categorical parameters

    Describe the bug: When using OpenBox's parameter-importance ranking with a categorical parameter, the program cannot output the importance ranking after tuning finishes and raises an error. Specifically, I wrote a demo that tests the efficiency of compression algorithms. The parameters are COMPRESS_LEVEL (integer, values 1–9) and COMPRESS_METHOD (categorical, values ["bz2", "zlib", "gzip"]). The demo measures the compression efficiency of a file under different COMPRESS_LEVEL and COMPRESS_METHOD combinations, with compression time and compression ratio (compressed size divided by original size) as the two objectives. The optimizer is configured as follows:

    opt = Optimizer( compress, space, num_objs=2, num_constraints=0, max_runs=50, surrogate_type='auto', acq_type='ehvi', time_limit_per_trial=30, task_id='compress', ref_point=[6.97, 2.74],)

    Tuning with OpenBox works normally, but calling history.get_importance() fails with a type error.

    To Reproduce: The error is: numpy.core._exceptions._UFuncNoLoopError: ufunc 'maximum' did not contain a loop with signature matching types (dtype('<U11'), dtype('<U11')) -> None

    Expected behavior: The parameter importance ranking is output normally.

    Outputs and Logs: The tuning log is attached: OpenBox-compress-logs.log

    Additional context: After converting the string values of the categorical parameter to numbers, the feature works again. Does OpenBox not support importance ranking for categorical (string-valued) parameters?

    bug 
    opened by znzjugod 4
  • self.config_advisor.get_suggestion() is running slower and slower, sometimes taking tens of seconds per call

    Hello, I started several OpenBox instances in separate processes (the total number of processes is less than the number of CPU cores), each running a single-process Optimizer(...) with max_runs set to 500. The first few dozen iterations are fairly fast, but it gets slower and slower; by rounds 100–300, each call to get a new suggestion takes tens of seconds. What is the reason, and how can it be resolved? Thank you.

    opened by jifeiran 4
  • How many hyperparameters can open-box handle?

    Hi, if there are hundreds of hyperparameters, can I use open-box to get a good result? And is there an empirical relationship between the number of runs and the number of hyperparameters?

    opened by yuzizbk 3
  • Why can't I install pyrfr? I have been trying for two days

    Collecting pyrfr==0.7.0 Downloading pyrfr-0.7.0.tar.gz (290 kB) -------------------------------------- 290.7/290.7 kB 1.8 MB/s eta 0:00:00 Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Building wheels for collected packages: pyrfr Building wheel for pyrfr (setup.py): started Building wheel for pyrfr (setup.py): finished with status 'error' Running setup.py clean for pyrfr Failed to build pyrfr Installing collected packages: pyrfr Running setup.py install for pyrfr: started Running setup.py install for pyrfr: finished with status 'error'

    error: subprocess-exited-with-error

    python setup.py bdist_wheel did not run successfully. exit code: 1

    [6 lines of output] warning: build_py: byte-compiling is disabled, skipping.

    cl: command line warning D9002: ignoring unknown option '-std=c++11' regression_wrap.cpp C:\Users\16406\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2 [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pyrfr error: subprocess-exited-with-error

    Running setup.py install for pyrfr did not run successfully. exit code: 1

    [6 lines of output] C:\Users\16406\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( cl: command line warning D9002: ignoring unknown option '-std=c++11' regression_wrap.cpp C:\Users\16406\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2 [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure

    Encountered error while trying to install package.

    pyrfr

    note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.

    opened by BaihtSYSU 2
  • Is there any way to set an equality constraint?

    Describe the bug

    import math
    import numpy as np
    from math import prod, fabs
    from openbox import sp
    
    
    def mishra(config: sp.Configuration):
        config_dict = config.get_dictionary()
        ws = np.array([config_dict['w%d' % i] for i in range(3)])
        # if fabs(sum(ws)-1)>1e-6:
        #     return {
        #         "objs":[math.inf,],
        #     }
    
        # if sum(ws) > 1:
        #     return {
        #         "objs": [math.inf, ]
        #     }
    
        result = dict()
        result['objs'] = [-1 * (ws[0] * 2 + ws[1] * 3 + ws[2]), ]
        # ?
        result["constraints"] = [fabs(sum(ws) - 1) - 1e-6, ]
        return result
    
    
    N = 3
    df = 1 / N
    params = {
        'float': {
            f"w{i}": (0, 1, df) for i in range(N)
        }
    }
    space = sp.Space()
    space.add_variables([
        sp.Real(name, *para) for name, para in params['float'].items()
    ])
    
    from openbox import Optimizer
    
    opt = Optimizer(
        mishra,
        space,
        num_objs=1,
        num_constraints=1,
        surrogate_type='gp',
        acq_optimizer_type='random_scipy',
        max_runs=50,
        time_limit_per_trial=10,
        task_id='soc',
    )
    history = opt.run()
    
    print(history)
    
    

    Here is my code. I want the sum of these three parameters to be 1. I tried adding constraints of both 1 - sum(ws) and sum(ws) - 1, but it doesn't work.

    opened by Zeng1998 2
  • Are there any side effects when using ParallelOptimizer?

    At the beginning, I launched only 12 parallel workers. To get results faster, I now launch 30 parallel workers, but I find that the results are worse after the same number of search iterations. Is this expected?

    By the way, the workers differ in performance; some run much faster, maybe three times faster than the slowest workers.

    question 
    opened by yuzizbk 2
  • Docs & Examples for Multi-Fidelity / Early Stopping

    E.g., is the resource dimension (e.g., the number of epochs or the number of samples to train) treated as just another hyperparameter? How can one do cross-task transfer learning with multi-fidelity, e.g., how does one report xval accuracy at every epoch for previous trials?

    opened by bbudescu 3
  • Support for conditional (nested, hierarchical) parameter space

    Does OpenBox support sampling some parameters only when a certain value has been sampled by some other parameter?

    E.g., for a neural net, don't sample layer3_n_filters, layer3_filter_w, layer3_filter_h if we only have 2 layers in the net (e.g., n_layers == 2).

    Or is there another way to address this, e.g., can one just signal that a particular parameter combination is invalid and quit early? Does OpenBox train a separate classification model that tries to predict feasibility for each combination? Should one use inequality constraints? What is the impact on the efficiency of exploring the search space? E.g., does it assign a high cost to infeasible combinations, and, if so, does this mean it will also assign a high prior cost to combinations on the edge of feasibility?

    In short, how can one treat conditional search spaces?

    documentation 
    opened by bbudescu 4
  • How to disable the function to print information?

    The project is wonderful! It helps me a lot! But I don't know how to disable the printed log output.

    I don't want this information printed, as it makes my results hard to observe. Is there any way to disable it?

    enhancement 
    opened by shengzeang 2
Releases (v0.8.0)
  • v0.8.0(Dec 18, 2022)

    Highlights

    • Add HTML visualization for the optimization process (#48).
      • Provide basic charts for objectives and constraints.
      • Provide advanced functions, including surrogate fitting analysis and hyperparameter importance analysis.
    • Update transfer learning (#54).
      • API change: for transfer-learning data, users should provide a List[History] as transfer_learning_history, instead of an OrderedDict[config, perf] as history_bo_data (#54, 4641d7cf); see the sketch after this list.
      • Examples and docs are updated.
    • Refactor History object (0bce5800).
      • Rename HistoryContainer to History.
      • Simplify data structure and provide convenient APIs.
      • Rewrite all methods, including data obtaining, plotting, saving/loading, etc.
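
    A minimal sketch of the new transfer-learning usage (the transfer_learning_history argument name comes from the release note above; passing it directly to Optimizer and the toy objective/space are assumptions):

    from openbox import Optimizer, sp

    # Toy search space and objective (illustrative only).
    space = sp.Space()
    space.add_variables([sp.Real("x", -5.0, 5.0)])

    def obj(config):
        return config['x'] ** 2

    # History collected from a previously finished (source) task.
    source_history = Optimizer(obj, space, max_runs=20, task_id='source_task').run()

    # Provide a List[History] via transfer_learning_history (replacing the old history_bo_data dict).
    target_opt = Optimizer(obj, space, max_runs=20, task_id='target_task',
                           transfer_learning_history=[source_history])
    target_opt.run()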

    Backwards Incompatible Changes

    • API change: objs is renamed to objectives, and num_objs is renamed to num_objectives (ecd5928a).
    • Change objective value of failed trials from MAXINT to np.inf (da88bd24).
    • Drop support for Python 3.6 (end of life on Dec 23, 2021).

    Other Changes

    • Add BlendSearch, LineBO and SafeOpt (experimental) (#40).
    • Add color logger. Provide fine-grained control of logging options (e.g., log level).
    • Rewrite python packaging of the project (#55).
    • Update Markdown parser in docs to myst-parser. recommonmark is deprecated.
    • Add pytest for examples.
    • Use GitHub Actions for CI/CD.

    Bug Fixes

    • Fix error return type of generic advisor and update sampler (Thanks @yezoli) (#44).
    • Consider constraints in plot_convergence (#47).
  • v0.7.18(Nov 14, 2022)

    • Add ConditionedSpace to support complex conditions between hyperparameters (https://github.com/PKU-DAIR/open-box/issues/37).
    • Numerous bug fixes.