Bayesian optimisation library developed by Huawei Noah's Ark Lab

Related tags

Deep Learning, HEBO
Overview

Bayesian Optimisation Research

This directory contains official implementations for Bayesian optimisation works developed by Huawei R&D, Noah's Ark Lab.

Further instructions are provided in the README files associated with each project.

HEBO


Bayesian optimisation library developed by the Huawei Noah's Ark Decision Making and Reasoning (DMnR) lab. The winning submission to the NeurIPS 2020 Black-Box Optimisation Challenge.
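
A minimal usage sketch, along the lines of the HEBO README demo (the toy objective below is ours):

    import numpy as np
    import pandas as pd

    from hebo.design_space.design_space import DesignSpace
    from hebo.optimizers.hebo import HEBO

    def obj(params: pd.DataFrame) -> np.ndarray:
        # Toy objective: minimise (x - 0.37)^2; HEBO minimises by convention.
        return ((params.values - 0.37) ** 2).sum(axis=1, keepdims=True)

    space = DesignSpace().parse([{'name': 'x', 'type': 'num', 'lb': -3.0, 'ub': 3.0}])
    opt = HEBO(space)
    for i in range(8):
        rec = opt.suggest(n_suggestions=4)  # DataFrame of candidate configurations
        opt.observe(rec, obj(rec))          # report the observed objective values
        print('Iter %d, best y: %.3f' % (i, opt.y.min()))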

T-LBO

Codebase associated with: High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning

Abstract

We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces using variational autoencoders (VAEs). By extending ideas from supervised deep metric learning, we address a longstanding problem in high-dimensional VAE Bayesian optimisation, namely how to enforce a discriminative latent space as an inductive bias. Importantly, we achieve such an inductive bias using just 1% of the available labelled data relative to previous work, highlighting the sample efficiency of our approach. As a theoretical contribution, we present a proof of vanishing regret for our method. As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems including property-guided molecule generation. It is our hope that the results presented in this paper can act as a guiding principle for realising effective high-dimensional Bayesian optimisation.
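
Illustrative only (not the paper's code): the kind of objective the abstract describes couples a standard VAE ELBO with a supervised metric-learning term on the latent codes, so that inputs with similar black-box values are mapped close together. Below is a sketch with a triplet loss standing in for the metric term; the weighting and the exact loss used in T-LBO may differ:

    import torch
    import torch.nn.functional as F

    def tlbo_style_loss(recon, x, mu, logvar, z_anchor, z_pos, z_neg,
                        beta: float = 1.0, gamma: float = 1.0):
        # Standard VAE terms: reconstruction error plus KL divergence to the prior.
        recon_loss = F.mse_loss(recon, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # Metric-learning term on latent codes: pull the anchor towards a point with
        # a similar objective value (z_pos) and away from a dissimilar one (z_neg).
        metric = F.triplet_margin_loss(z_anchor, z_pos, z_neg, margin=1.0)
        return recon_loss + beta * kl + gamma * metric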

Bayesian Optimisation with Compositional Optimisers


Codebase associated with: Are we Forgetting about Compositional Optimisers in Bayesian Optimisation?

Abstract

Bayesian optimisation presents a sample-efficient methodology for global optimisation. Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function, a task complicated by the fact that acquisition functions tend to be non-convex and thus nontrivial to optimise. In this paper, we undertake a comprehensive empirical study of approaches to maximise the acquisition function. Additionally, by deriving novel, yet mathematically equivalent, compositional forms for popular acquisition functions, we recast the maximisation task as a compositional optimisation problem, allowing us to benefit from the extensive literature in this field. We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments comprising synthetic optimisation tasks as well as tasks from Bayesmark. Given the generality of the acquisition function maximisation subroutine, we posit that the adoption of compositional optimisers has the potential to yield performance improvements across all domains in which Bayesian optimisation is currently being applied.
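
Schematically (our notation, not the paper's), a compositional objective nests an expectation inside an outer function; the paper shows that popular Monte-Carlo acquisition functions admit mathematically equivalent rewritings of this form, so optimisers designed for such problems apply directly to acquisition-function maximisation:

    \max_{x \in \mathcal{X}} \; f\big( \mathbb{E}_{\omega}\, [\, g(x, \omega) \,] \big),
    \qquad f : \mathbb{R}^m \to \mathbb{R}, \quad g(\cdot, \omega) : \mathcal{X} \to \mathbb{R}^m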

Codebase Contributors

Alexander I Cowen-Rivers, Antoine Grosnit, Alexandre Max Maravel, Ryan-Rhys Griffiths, Wenlong Lyu, Zhi Wang.

Comments
  • ValueError: The value argument must be within the support

    2021-11-01 08:02:44.622 ERROR Traceback (most recent call last):
      File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/pipeline.py", line 79, in run
        pipestep.do()
      File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/search_pipe_step.py", line 55, in do
        self._dispatch_trainer(res)
      File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/search_pipe_step.py", line 73, in _dispatch_trainer
        self.master.run(trainer, evaluator)
      File "/home/ma-user/work/automl-1.8_EI/vega/core/scheduler/local_master.py", line 63, in run
        self._update(step_name, worker_id)
      File "/home/ma-user/work/automl-1.8_EI/vega/core/scheduler/local_master.py", line 71, in _update
        self.update_func(step_name, worker_id)
      File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/generator.py", line 131, in update
        self.search_alg.update(record.serialize())
      File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/hpo_base.py", line 84, in update
        self.hpo.add_score(config_id, int(rung_id), rewards)
      File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/boss.py", line 230, in add_score
        self._set_next_ssa()
      File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/boss.py", line 159, in _set_next_ssa
        configs = self.tuner.propose(self.iter_list[iter])
      File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/hebo_adaptor.py", line 70, in propose
        suggestions = self.hebo.suggest(n_suggestions=num)
      File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/optimizers/hebo.py", line 126, in suggest
        rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
      File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 122, in optimize
        print("optimize: ", prob, algo, self.iter)
      File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 448, in __str__
        s += "# f(xl): %s\n" % self.evaluate(self.xl)[0]
      File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 267, in evaluate
        out = self._evaluate_batch(X, calc_gradient, out, *args, **kwargs)
      File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 335, in _evaluate_batch
        self._evaluate(X, out, *args, **kwargs)
      File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 50, in _evaluate
        acq_eval = self.acq(xcont, xenum).numpy().reshape(num_x, self.acq.num_obj + self.acq.num_constr)
      File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acquisitions/acq.py", line 39, in __call__
        return self.eval(x, xe)
      File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acquisitions/acq.py", line 157, in eval
        log_phi = dist.log_prob(normed)
      File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/torch/distributions/normal.py", line 73, in log_prob
        self._validate_sample(value)
      File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/torch/distributions/distribution.py", line 277, in _validate_sample
        raise ValueError('The value argument must be within the support')
    ValueError: The value argument must be within the support

    opened by hujiaxin0 8
  • ValueError: NaN in distribution

    Hi, thanks for this repository! So far it works quite well, but now I suddenly encountered a weird error after 11 optimization steps of non-batched HEBO:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    /tmp/ipykernel_2773121/4102601230.py in <module>
         35 
         36 for i in range(opt_steps):
    ---> 37     rec = opt.suggest()
         38     if "bs" in rec:
         39         rec["bs"] = 2 ** rec["bs"]
    
    ~/.local/lib/python3.8/site-packages/hebo/optimizers/hebo.py in suggest(self, n_suggestions, fix_input)
        151             sig = Sigma(model, linear_a = -1.)
        152             opt = EvolutionOpt(self.space, acq, pop = 100, iters = 100, verbose = False, es=self.es)
    --> 153             rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
        154             rec = rec[self.check_unique(rec)]
        155 
    
    ~/.local/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py in optimize(self, initial_suggest, fix_input, return_pop)
        125         crossover = self.get_crossover()
        126         algo      = get_algorithm(self.es, pop_size = self.pop, sampling = init_pop, mutation = mutation, crossover = crossover, repair = self.repair)
    --> 127         res       = minimize(prob, algo, ('n_gen', self.iter), verbose = self.verbose)
        128         if res.X is not None and not return_pop:
        129             opt_x = res.X.reshape(-1, len(lb)).astype(float)
    
    ~/.local/lib/python3.8/site-packages/pymoo/optimize.py in minimize(problem, algorithm, termination, copy_algorithm, copy_termination, **kwargs)
         81 
         82     # actually execute the algorithm
    ---> 83     res = algorithm.run()
         84 
         85     # store the deep copied algorithm in the result object
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/algorithm.py in run(self)
        211         # while termination criterion not fulfilled
        212         while self.has_next():
    --> 213             self.next()
        214 
        215         # create the result object to be returned
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/algorithm.py in next(self)
        231         # call the advance with them after evaluation
        232         if infills is not None:
    --> 233             self.evaluator.eval(self.problem, infills, algorithm=self)
        234             self.advance(infills=infills)
        235 
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/evaluator.py in eval(self, problem, pop, skip_already_evaluated, evaluate_values_of, count_evals, **kwargs)
         93         # actually evaluate all solutions using the function that can be overwritten
         94         if len(I) > 0:
    ---> 95             self._eval(problem, pop[I], evaluate_values_of=evaluate_values_of, **kwargs)
         96 
         97             # set the feasibility attribute if cv exists
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/evaluator.py in _eval(self, problem, pop, evaluate_values_of, **kwargs)
        110         evaluate_values_of = self.evaluate_values_of if evaluate_values_of is None else evaluate_values_of
        111 
    --> 112         out = problem.evaluate(pop.get("X"),
        113                                return_values_of=evaluate_values_of,
        114                                return_as_dictionary=True,
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/problem.py in evaluate(self, X, return_values_of, return_as_dictionary, *args, **kwargs)
        122 
        123         # do the actual evaluation for the given problem - calls in _evaluate method internally
    --> 124         self.do(X, out, *args, **kwargs)
        125 
        126         # make sure the array is 2d before doing the shape check
    
    ~/.local/lib/python3.8/site-packages/pymoo/core/problem.py in do(self, X, out, *args, **kwargs)
        160 
        161     def do(self, X, out, *args, **kwargs):
    --> 162         self._evaluate(X, out, *args, **kwargs)
        163         out_to_2d_ndarray(out)
        164 
    
    ~/.local/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py in _evaluate(self, x, out, *args, **kwargs)
         46 
         47         with torch.no_grad():
    ---> 48             acq_eval = self.acq(xcont, xenum).numpy().reshape(num_x, self.acq.num_obj + self.acq.num_constr)
         49             out['F'] = acq_eval[:, :self.acq.num_obj]
         50 
    
    ~/.local/lib/python3.8/site-packages/hebo/acquisitions/acq.py in __call__(self, x, xe)
         37 
         38     def __call__(self, x : Tensor,  xe : Tensor):
    ---> 39         return self.eval(x, xe)
         40 
         41 class SingleObjectiveAcq(Acquisition):
    
    ~/.local/lib/python3.8/site-packages/hebo/acquisitions/acq.py in eval(self, x, xe)
        155             normed    = ((self.tau - self.eps - py - noise * torch.randn(py.shape)) / ps)
        156             dist      = Normal(0., 1.)
    --> 157             log_phi   = dist.log_prob(normed)
        158             Phi       = dist.cdf(normed)
        159             PI        = Phi
    
    ~/.local/lib/python3.8/site-packages/torch/distributions/normal.py in log_prob(self, value)
         71     def log_prob(self, value):
         72         if self._validate_args:
    ---> 73             self._validate_sample(value)
         74         # compute the variance
         75         var = (self.scale ** 2)
    
    ~/.local/lib/python3.8/site-packages/torch/distributions/distribution.py in _validate_sample(self, value)
        286         valid = support.check(value)
        287         if not valid.all():
    --> 288             raise ValueError(
        289                 "Expected value argument "
        290                 f"({type(value).__name__} of shape {tuple(value.shape)}) "
    
    ValueError: Expected value argument (Tensor of shape (100, 1)) to be within the support (Real()) of the distribution Normal(loc: 0.0, scale: 1.0), but found invalid values:
    tensor([[ -1.1836],
            [ -1.2862],
            [-11.6360],
            [-11.3412],
            [  0.3811],
            [ -2.0235],
            [ -1.7288],
            [ -8.3472],
            [-10.1714],
            [ -2.6084],
            [ -0.8098],
            [ -0.9687],
            [ -9.0626],
            [ -2.2273],
            [ -9.0942],
            [ -1.6956],
            [ -6.6197],
            [ -9.3882],
            [ -6.1594],
            [ -9.2895],
            [ -1.7074],
            [  0.8382],
            [-14.6693],
            [ -0.8303],
            [-10.2741],
            [  0.2808],
            [ -9.3681],
            [ -0.6729],
            [ -2.0288],
            [ -1.4389],
            [ -7.1975],
            [-11.5732],
            [-10.2751],
            [ -1.3800],
            [ -1.9773],
            [ -1.4668],
            [ -9.7166],
            [ -8.3093],
            [-15.5914],
            [ -0.0808],
            [  0.3732],
            [-16.2714],
            [ -2.3120],
            [ -8.7503],
            [ -1.6276],
            [     nan],
            [-15.3692],
            [ -9.1615],
            [ -9.8093],
            [ -2.0716],
            [ -1.9259],
            [  0.9543],
            [ -8.1521],
            [ -2.5709],
            [ -1.6153],
            [-10.7236],
            [ -0.0763],
            [  0.0543],
            [ -7.2755],
            [-10.6411],
            [ -7.9253],
            [-19.4996],
            [ -2.0001],
            [-11.7616],
            [-11.0187],
            [-12.0727],
            [ -1.3243],
            [-11.2528],
            [ -1.5527],
            [ -0.9219],
            [ -1.0130],
            [-10.1825],
            [-18.3420],
            [-11.1005],
            [ -8.5818],
            [-11.1588],
            [ -8.8115],
            [ -1.0410],
            [-15.2722],
            [ -1.8399],
            [ -1.0827],
            [ -1.0277],
            [ -6.4768],
            [ -8.3902],
            [ -0.9513],
            [ -1.3429],
            [ -1.0889],
            [ -7.2952],
            [ -7.8548],
            [ -0.0231],
            [ -7.1898],
            [-20.4194],
            [ -1.2503],
            [-19.6157],
            [ -0.3398],
            [-15.7221],
            [-10.3210],
            [ -9.5764],
            [ -0.2335],
            [ -0.3788]])
    

    Seems like there is a NaN in some distribution of HEBO. But my input parameters (opt.X) and losses (opt.y) are never NaN. This is the design space I'm using:

    space = DesignSpace().parse([{'name': 'lr', 'type' : 'num', 'lb' : 0.00005, 'ub' : 0.1},
                                     {'name': 'n_estimators', 'type' : 'int', 'lb' : 1, 'ub' : 20},  # multiplied by 10
                                     {'name': 'max_depth', 'type' : 'int', 'lb' : 1, 'ub' : 10},
                                     {'name': 'subsample', 'type' : 'num', 'lb' : 0.5, 'ub' : 0.99},
                                     {'name': 'colsample_bytree', 'type' : 'num', 'lb' : 0.5, 'ub' : 0.99},
                                     {'name': 'gamma', 'type' : 'num', 'lb' : 0.01, 'ub' : 10.0},
                                     {'name': 'min_child_weight', 'type' : 'int', 'lb' : 1, 'ub' : 10},
                                     
                                     {'name': 'fill_type', 'type' : 'cat', 'categories' : ['median', 'pat_median','pat_ema']},
                                     {'name': 'flat_block_size', 'type' : 'int', 'lb' : 1, 'ub' : 1}
                                    ])
        
    opt = HEBO(space)
    

    I already commented out flat_block_size as I thought that maybe it is a problem if lb == ub, but it still crashes.

    Any ideas on how I can debug this?
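
    Not a definitive answer, but two diagnostics that may help narrow this down, assuming the NaN is produced inside the acquisition evaluation (e.g. a degenerate predictive standard deviation) rather than in the observed data:

    import torch

    # Workaround/diagnostic sketch, not an upstream fix: recent torch versions
    # validate distribution arguments by default, so a single NaN in the
    # normalised improvement tensor raises immediately. Disabling validation
    # lets the run continue long enough to inspect the offending batch.
    torch.distributions.Distribution.set_default_validate_args(False)

    # If the NaN stems from a near-zero or NaN predictive standard deviation,
    # checking the surrogate's predictions around iteration 11 should confirm it
    # (model.predict here is a hypothetical stand-in for HEBO's internal model API):
    # py, ps2 = model.predict(Xc, Xe)
    # print(torch.isnan(ps2).any(), (ps2 <= 0).any())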

    opened by NotNANtoN 5
  • hebo take up too much cpus

    The main process of HEBO takes up 1600% CPU, which makes other parts of the program run very slowly. Is there any way to reduce HEBO's CPU usage? Thanks!
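
    Not an official answer, but HEBO's GP fitting and evolutionary acquisition optimisation run on torch/numpy, whose BLAS/OpenMP thread pools grab every core by default; capping them is often enough. A sketch, assuming those thread pools are indeed the culprit:

    import os

    # Set these before numpy/torch are imported so the BLAS/OpenMP pools honour them.
    os.environ["OMP_NUM_THREADS"] = "1"
    os.environ["MKL_NUM_THREADS"] = "1"

    import torch
    torch.set_num_threads(1)  # cap torch's intra-op parallelism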

    opened by Shaofei-Li 3
  • 'float' object cannot be interpreted as an integer

    I get the following error when attempting to run a search with an XGB model: TypeError: 'float' object cannot be interpreted as an integer.

    space_cfg = [
        {'name': 'max_depth',         'type': 'int', 'lb': 1,    'ub': 10},
        {'name': 'min_child_weight',  'type': 'int', 'lb': 1,    'ub': 100},
        {'name': 'n_estimators',      'type': 'int', 'lb': 1,    'ub': 10000},
        {'name': 'alpha',             'type': 'num', 'lb': 0,    'ub': 100},
        {'name': 'lambda',            'type': 'num', 'lb': 0,    'ub': 100},
        {'name': 'gamma',             'type': 'num', 'lb': 0,    'ub': 100},
        {'name': 'eta',               'type': 'pow', 'lb': 1e-5, 'ub': 1},
        {'name': 'colsample_bytree',  'type': 'num', 'lb': 1/3,  'ub': 1},
        {'name': 'colsample_bylevel', 'type': 'num', 'lb': 1/3,  'ub': 1},
        {'name': 'colsample_bynode',  'type': 'num', 'lb': 1/3,  'ub': 1},
        {'name': 'subsample',         'type': 'num', 'lb': 1/27, 'ub': 100},
    ]
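
    A workaround commonly reported for this error (see also the iloc-related issue further down this page) is to cast integer-typed suggestions back to int before handing them to XGBoost, since pandas row indexing can silently promote them to float. A sketch with hypothetical helper names:

    # Hypothetical helper: make sure integer hyperparameters reach XGBoost as ints.
    INT_PARAMS = {'max_depth', 'min_child_weight', 'n_estimators'}

    def to_xgb_params(rec_row: dict) -> dict:
        return {k: (int(v) if k in INT_PARAMS else v) for k, v in rec_row.items()}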

    opened by replacementAI 3
  • HEBO demo codes fail

    Hi,

    I installed the HEBO library, and tried to run the demo codes here:

    1. https://github.com/huawei-noah/HEBO/tree/master/HEBO#demo
    2. https://github.com/huawei-noah/HEBO/tree/master/HEBO#auto-tuning-via-sklearn-estimator

    Both begin executing but eventually fail with the error: TypeError: __init__() got an unexpected keyword argument 'prob_per_variable'

    The complete stacktrace for the second demo code (auto-tuning sklearn estimator) is:

    Iter 0, best metric: 0.398791  
    Iter 1, best metric: 0.492467
    Iter 2, best metric: 0.658477
    Iter 3, best metric: 0.658477
    Iter 4, best metric: 0.658477
    Iter 5, best metric: 0.658477
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/redacted_path_1/HEBO-master/HEBO/hebo/sklearn_tuner.py", line 74, in sklearn_tuner
        rec     = opt.suggest()
      File "/redacted_path_1/HEBO-master/HEBO/hebo/optimizers/hebo.py", line 153, in suggest
        rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
      File "/redacted_path_1/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 126, in optimize
        algo      = get_algorithm(self.es, pop_size = self.pop, sampling = init_pop, mutation = mutation, crossover = crossover, repair = self.repair)
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/factory.py", line 85, in get_algorithm
        return get_from_list(get_algorithm_options(), name, args, {**d, **kwargs})
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/factory.py", line 49, in get_algorithm_options
        from pymoo.algorithms.moo.ctaea import CTAEA
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/algorithms/moo/ctaea.py", line 223, in <module>
        class CTAEA(GeneticAlgorithm):
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/algorithms/moo/ctaea.py", line 230, in CTAEA
        mutation=PM(eta=20, prob_per_variable=None),
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/operators/mutation/pm.py", line 77, in __init__
        super().__init__(prob=prob, **kwargs)
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/core/mutation.py", line 29, in __init__
        super().__init__(**kwargs)
      File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/core/mutation.py", line 10, in __init__
        super().__init__(**kwargs)
    TypeError: __init__() got an unexpected keyword argument 'prob_per_variable'
    

    Can you provide pointers for fixing this? Thanks!
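
    Not an official fix, but the trace comes from a pymoo 0.6.0 development build whose mutation API no longer matches what this HEBO release (and even part of that pymoo build itself) expects. Failing fast on the installed version can save digging through the stack; a sketch, assuming HEBO here was written against pymoo 0.5.x:

    # Hypothetical guard: refuse to run against a pymoo newer than this HEBO
    # release was written for (the 0.6.0 bound is an assumption).
    import pymoo
    from packaging import version

    if version.parse(pymoo.__version__) >= version.parse("0.6.0"):
        raise RuntimeError(f"pymoo {pymoo.__version__} detected; "
                           f"install pymoo<0.6.0 (e.g. 0.5.0) to run these demos")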

    opened by abhishek-ghose 2
  • Definition of "compositional" in "compositional optimizers"

    At one point, I may have understood this, but I find myself often wondering and asking again: what does "compositional" in "compositional optimizer" refer to? Part of my confusion probably stems from my materials informatics background, where composition often refers to the chemical make-up (i.e. chemical formula) of a particular compound. In other cases, it just implies that the contribution of individual components sums to one.

    https://github.com/huawei-noah/HEBO/tree/master/CompBO

    opened by sgbaird 2
  • Fix pymoo's ElementwiseProblem call in VCBO

    Hello. The VCBO optimizer's call to pymoo's ElementwiseProblem is fixed. The new version of pymoo requires that the problem be of type "ElementwiseProblem".

    opened by ajikmr 2
  • An unexpected bug

    When I was using hebo.sklearn_tuner to optimize the hyperparameters of XGBoost, I got the error "TypeError: 'float' object cannot be interpreted as an integer", so I set a breakpoint inside the function to find the cause. It happens because the DataFrame method iloc converts the data type from int to float, which causes the error.
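
    The promotion described above is easy to reproduce outside HEBO: with one int column and one float column, selecting a row with iloc returns a Series upcast to a common float dtype:

    import pandas as pd

    df = pd.DataFrame({'max_depth': [3], 'subsample': [0.8]})
    print(df.dtypes)               # max_depth: int64, subsample: float64
    row = df.iloc[0]               # the row-wise Series is upcast to float64
    print(type(row['max_depth']))  # <class 'numpy.float64'>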

    opened by KNwbq 2
  • pymoo has no module 'algorithms.so_genetic_algorithm'

    When running the example code for sklearn tuner, I get the following error message

    ModuleNotFoundError: No module named 'pymoo.algorithms.so_genetic_algorithm'

    The line of code in question is from pymoo.algorithms.so_genetic_algorithm import GA within evolution_optimizer.py

    I can reproduce it standalone as well (i.e. installing pymoo and trying to run that import), and I don't see this algorithm in pymoo's API reference.

    Looks like this comes from a breaking change in pymoo 0.5.0, which they describe as: "The package structure has been modified to distinguish between single- and multi-objective optimization more clearly."

    Based on their updated API for version 0.5.0, I believe but am not sure that the genetic algorithm import needs to be changed to:

    from pymoo.algorithms.soo.nonconvex.ga import GA
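
    A version-tolerant variant (assuming only the module path changed, not the class itself) would be:

    try:
        from pymoo.algorithms.soo.nonconvex.ga import GA       # pymoo >= 0.5.0
    except ModuleNotFoundError:
        from pymoo.algorithms.so_genetic_algorithm import GA   # pymoo <= 0.4.2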

    opened by nathanwalker-sp 2
  • sklearn_tuner.py example failing with pymoo version 0.5.0

    Using the latest version of pymoo (0.5.0), the sklearn_tuner.py example fails with the error:

    ModuleNotFoundError: No module named 'pymoo.algorithms.so_genetic_algorithm'

    It would be great if we could either make the example compatible with pymoo==0.5.0 or update the requirements and install instructions to specify pymoo==0.4.2 as the required version.

    The example runs with pymoo==0.4.2, although it breaks with a different error after 5 iterations:

    /Users/~/opt/miniconda3/envs/hebo/bin/python /Users/ryan_rhys/ml_physics/HEBO/HEBO/hebo/sklearn_tuner.py
    Iter 0, best metric: 0.389011
    Iter 1, best metric: 0.496764
    Iter 2, best metric: 0.649803
    Iter 3, best metric: 0.649803
    Iter 4, best metric: 0.649803
    Iter 5, best metric: 0.649803
    Traceback (most recent call last):
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py", line 119, in optimize
        res   = minimize(prob, algo, ('n_gen', self.iter), verbose = self.verbose)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/optimize.py", line 85, in minimize
        res = algorithm.solve()
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 226, in solve
        self._solve(self.problem)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 321, in _solve
        self.next()
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 246, in next
        self._next()
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/algorithms/genetic_algorithm.py", line 93, in _next
        self.off = self.mating.do(self.problem, self.pop, self.n_offsprings, algorithm=self)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/infill.py", line 40, in do
        _off = self.eliminate_duplicates.do(_off, pop, off)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 26, in do
        pop = pop[~self._do(pop, None, np.full(len(pop), False))]
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 75, in _do
        D = self.calc_dist(pop, other)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 66, in calc_dist
        D = cdist(X, X)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/util/misc.py", line 90, in cdist
        return scipy.spatial.distance.cdist(A, B, **kwargs)
      File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/scipy/spatial/distance.py", line 2954, in cdist
        return cdist_fn(XA, XB, out=out, **kwargs)
    ValueError: Unsupported dtype object
    
    opened by Ryan-Rhys 2
  • The inverse_transform does not work as expected.

    Original code: https://github.com/huawei-noah/HEBO/blob/f9b7f578bd703ba2f47840a581bb4294faff1e12/HEBO/hebo/design_space/int_exponent_param.py#L33-L34

    As I understand it, the function inverse_transform() here should work by taking arbitrary input(s) as the exponent; however, we cannot guarantee that the input(s) have an int dtype. In such a case, consider the following code:

    https://github.com/huawei-noah/HEBO/blob/f9b7f578bd703ba2f47840a581bb4294faff1e12/HEBO/hebo/optimizers/evolution.py#L71-L83

    Here x could be an arbitrary number. Assuming we have x = np.array([5.4]), the toy code below demonstrates the problem:

    import numpy as np
    from hebo.design_space.int_exponent_param import IntExponentPara

    p = IntExponentPara({'name': 'p', 'base': 2, 'lb': 32, 'ub': 256})  # expected parameter values: [32, 64, 128, 256]
    x = np.array([5.4])
    print(p.inverse_transform(x))
    # >>> [42]
    # 2**5.4 = 42.22, which astype(int) truncates to 42 -- not one of the valid values [32, 64, 128, 256]
    

    That is to say, the suggested parameter(s) produced by optimizers.evolution.suggest() could violate the parameter's definition. To resolve this issue, I opened a pull request: #33.
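
    For illustration, the kind of fix the pull request aims for (the exact implementation there may differ) rounds and clips the exponent before exponentiating, so the result is always one of the valid values; the function name below is ours:

    import numpy as np

    def inverse_transform_rounded(x, base=2, lb=32, ub=256):
        # Round the possibly-fractional exponent and clip it so base**exponent stays in [lb, ub].
        lo = np.round(np.log(lb) / np.log(base))
        hi = np.round(np.log(ub) / np.log(base))
        exponent = np.clip(np.round(x), lo, hi)
        return (base ** exponent).astype(int)

    print(inverse_transform_rounded(np.array([5.4])))  # [32] -- a valid value in {32, 64, 128, 256}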

    opened by kaimo455 1
  • pymoo 0.6.0 removed many files and functions used in HEBO

    pymoo 0.6.0 removed many files and functions used in HEBO, making it impossible to use with Python > 3.8. This fix moves those files (from pymoo 0.5.0) back into the folder pymoo_fix, and uses torch.linalg.eig since Tensor.eig was removed after PyTorch 1.9.

    opened by gengjun 2
  • Support for conditional/hierarchical design spaces

    Can one specify a conditional/hierarchical search space to HEBO? I.e., something similar to SMAC3?

    E.g., sample the number of convolutional filters in the second layer of a neural net only if we decide (via another parameter) to have a network with at least two layers.

    I'm thinking that this can be somewhat circumvented by returning a high cost value for infeasible combinations (a sketch of this appears below), but I imagine this might be a suboptimal approach, since it can affect optimization performance: some optima might lie on the edge of feasible regions, and a cost estimator with some sort of smoothness prior (which they usually have) has a chance of assigning unfaithfully high cost values near the infeasible configurations, at least initially (the a priori probability of a kink in the error surface is generally lower).

    Some optimizers address this by training a separate model, a feasibility predictor, which has the advantage of being able to work with unknown feasibility constraints.

    So how should one deal with this in HEBO?
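
    For reference, the penalty workaround described above might look like the sketch below (is_feasible, train_and_eval and the penalty value are all hypothetical, and this is not a recommendation over proper conditional-space support):

    import numpy as np
    import pandas as pd

    PENALTY = 1e6  # large but finite cost assigned to infeasible configurations

    def objective(params: pd.DataFrame) -> np.ndarray:
        out = np.empty((len(params), 1))
        for i, (_, row) in enumerate(params.iterrows()):
            if not is_feasible(row):          # hypothetical feasibility check
                out[i] = PENALTY
            else:
                out[i] = train_and_eval(row)  # hypothetical user-defined evaluation
        return out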

    opened by bbudescu 0
  • Support for fixed parameters

    Is it possible to have HEBO search for the optimum over a design space where a parameter is defined in a regular fashion (e.g., as a real or integer), but also, temporarily (i.e., during a single optimization session), constrain the search space to a single value of that parameter and search only the subspace defined by the remaining unconstrained parameters?

    I.e., something similar to Optuna's PartialFixedSampler

    Why this is useful

    Context

    Generally, it is useful to be able to run several optimization sessions and use the results of trials from previous sessions to improve convergence in the current session. As far as I can understand from the docs, using HEBO one can run a bunch of trials in one or more sessions and then use observe API calls to feed the previous results to the Bayesian model before further exploring with suggest in the current session.

    Scenario no. 1 - Accelerate optimization by reducing the search space

    Now, as explained in the Optuna docs linked above and in the related issue, after running a bunch of trials one might decide upon a particular value of a parameter and want to prevent the optimizer from investing more time exploring values of that parameter that clearly do not yield better results, while still taking into account the results obtained with other values of that parameter (along the dimensions of the other unbound params, the trained cost predictor might still provide valuable insights).

    Scenario no. 2 - Transfer Learning

    If this is implemented, one could do transfer learning in a fashion similar to the one proposed, e.g., by SMAC3 (they call this feature 'Optimization across Instances') or OpenBox, i.e., reuse knowledge about high-yield sub-regions from one instance (dataset/task) to another.

    Potential Solution

    I'm thinking that one could simply change the bounds of a particular parameter every time a new optimization session is started. However, I'm not quite sure that the observe call will accept values that are out of bounds, nor, even if it doesn't crash, that the underlying Bayesian model is trained correctly.
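
    One thing worth checking: HEBO's suggest already exposes a fix_input argument (it is visible in the tracebacks earlier on this page), which looks close to what is asked for here, although whether it behaves well across sessions is for the maintainers to confirm. Roughly:

    # Sketch: pin 'lr' for this session and optimise only the remaining parameters.
    for _ in range(n_steps):                                  # n_steps: hypothetical
        rec = opt.suggest(n_suggestions=1, fix_input={'lr': 0.01})
        opt.observe(rec, evaluate(rec))                       # evaluate(): the user's objective (hypothetical)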

    opened by bbudescu 0
  • "High-dimensional optimisation via random linear embedding" example running error

    Hi! When I run this example, I get the following error (screenshot attached):

    I noticed that when I set clip to True, the example runs successfully, but when clip is left at its default value of False, the returned value MACE_Embedding is not an instance of the class MACE, which leads to the above error (screenshot attached).

    opened by Shaofei-Li 0
  • [Quick Fix] Licence in the Setup ?

    Hi, can you add licence info in the setup file?

    Something like

    setup( .... , 
         classifiers=[
            'License :: OSI Approved :: MIT License',
         ]
    )
    

    This will make licenses flow through PyPI correctly.

    opened by sanket-kamthe 0
Releases (v0.3.4)
Owner
HUAWEI Noah's Ark Lab
Working with and contributing to the open source community in data mining, artificial intelligence, and related fields.