Nevergrad - A gradient-free optimization platform

Overview


nevergrad is a Python 3.6+ library. It can be installed with:

pip install nevergrad

More installation options, including Windows installation, and complete instructions are available in the "Getting started" section of the documentation.

You can join the Nevergrad users Facebook group here.

Minimizing a function using an optimizer (here NGOpt) is straightforward:

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # recommended value
>>> [0.49971112 0.5002944]

nevergrad also supports bounded continuous variables, discrete variables, and mixtures of both. To use them, specify the input space:

import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2)**2 + (batch_size - 4)**2 + (0 if architecture == "conv" else 10)

# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"])
)

optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)

# show the recommended keyword arguments of the function
print(recommendation.kwargs)
>>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}
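
The same problem can also be solved step by step through the ask-and-tell interface; a minimal sketch (with a fresh optimizer instance):

optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()                                # get a candidate
    loss = fake_training(*candidate.args, **candidate.kwargs)  # evaluate it
    optimizer.tell(candidate, loss)                            # report the loss
recommendation = optimizer.provide_recommendation()
print(recommendation.kwargs)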

Learn more about parametrization in the documentation!

Example of optimization

Convergence of a population of points to the minimum with two-points DE.

Documentation

Check out our documentation! It's still a work in progress; don't hesitate to submit issues and/or PRs to update it and make it clearer!

Citing

@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}

License

nevergrad is released under the MIT license. See LICENSE for additional details about it. See also our Terms of Use and Privacy Policy.

Comments
  • Adding IOHexperimenter functions

    Adding IOHexperimenter functions

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Adds the functions from the PBO-suite of the IOHexperimenter into nevergrad (#338)

    How Has This Been Tested (if it applies)

    Still in progress

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Dvermetten 22
  • add support for alternative cmaes implementation

    add support for alternative cmaes implementation

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Bad performance of cmaes for higher dimensions

    How Has This Been Tested (if it applies)

    Tested by including the new optimization algorithm in several benchmarks and executed them successfully

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [ ] All tests passed, and additional code has been covered with new tests.

    Purpose of this PR is:

    a) Enable experiments with an alternative CMA implementation - install with "pip install fcmaes".

    b) Expose popsize as an important CMA parameter.

    c) Read _es.stop() / es.stop to check whether CMA has terminated and, in that case, kill the rest of the assigned budget. Otherwise, time comparisons with other algorithms that cannot tell whether they are stuck are unfair. Maybe some of them can; then this idea should be generalized.

    General observations:

    • Benchmarks where the optimization algos are configured with workers=1 run fastest if

      a) Benchmark multiprocessing is configured using workers=multiprocessing.cpu_count(). I checked this with a variety of CPUs from Intel + AMD.

      b) The algos use only one thread, which means you have to set export "MKL_NUM_THREADS=1", or export "OPENBLAS_NUM_THREADS=1" if numpy is configured to use OpenBLAS. Without doing this, CMA, which relies heavily on BLAS, suffers badly - sometimes it is a factor of 10 slower.

      c) Use "export MKL_DEBUG_CPU_TYPE=5" if you are using an AMD CPU, otherwise Intel MKL "ignores" the advanced SIMD instructions of your CPU.

    • popsize is an important CMA parameter which should be benchmarked with different settings. A higher popsize means a broader search, which can slow down optimization for lower budgets but often pays off when the budget is higher.

    • Scaling should not be viewed only as a scalar; often the dimensions should be scaled separately. FCMA does this automatically if bounds are defined: (lb, ub) is mapped onto ([-1]*dim, [1]*dim). Nevergrad doesn't provide the optimization algorithms with information about bounds, which may be a bad idea.

    The CMA implementation https://github.com/dietmarwo/fast-cma-es offered as an alternative is much faster in higher dimensions because it better utilizes the underlying BLAS library and avoids loops where possible. There is no "DiagonalOnly" option supported yet, but tests have to show whether this option still makes sense.

    With all the benchmarks / tests implemented in Nevergrad it should be possible to check:

    • Whether there are cases where FCMA performs worse than CMA.
    • How the execution times compare.
    • Which parameters are optimal for FCMA.
    • How to reconfigure NGO - if and when to use FCMA, and with which parameters - to achieve optimal results.

    My tests have shown that the advantage of FCMA is higher when both CMAs are compared standalone or with an efficient parallel retry mechanism. Nevertheless, FCMA is significantly faster, especially in higher dimensions, outperforming most of the other algorithms in execution time.

    CLA Signed 
    opened by dietmarwo 18
  • NSGA-II implementation

    NSGA-II implementation

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Related issue: https://github.com/facebookresearch/nevergrad/issues/820. Add the NSGA-II algorithm. The key procedures are the computation of the nondomination rank and the crowding distance for the candidates (a sketch of the latter follows below). The initialization, crossover, and mutation procedures are borrowed from DE and ES in the current library.
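
    For illustration, a minimal, self-contained sketch of the classic NSGA-II crowding distance (the idea only; not necessarily this PR's actual implementation):

    import numpy as np

    def crowding_distance(losses: np.ndarray) -> np.ndarray:
        # losses: (n_candidates, n_objectives) array for one nondomination front
        n, m = losses.shape
        distance = np.zeros(n)
        for k in range(m):
            order = np.argsort(losses[:, k])
            distance[order[0]] = distance[order[-1]] = np.inf  # keep boundary points
            span = losses[order[-1], k] - losses[order[0], k]
            if span > 0:
                gaps = losses[order[2:], k] - losses[order[:-2], k]  # neighbor distances
                distance[order[1:-1]] += gaps / span
        return distance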

    How Has This Been Tested

    1. The nondomination rank and crowding distance are tested by applying them to manually created candidates (i.e. p.Parameter variables) whose losses are preset.

    2. The NSGA-II algorithm is tested on a multiobjective function which was originally used to test CMA.

    3. Test for correct population size as defined in test_optimizerlib.py

    Things to Review

    1. In the current implementation, I'm not quite sure if I use the "ask" and "tell" methods correctly.
    2. Not sure why "_internal_tell_not_asked" would happen, as it does in DE.
    3. Is the custom "recommend" method in DE necessary? I found that ES does not implement "recommend".
    4. Do I need to update the "heritage" variable for a candidate?

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by pacowong 17
  • ElectricalMix Simulator using nevergrad

    ElectricalMix Simulator using nevergrad

    Pull request to apply for the Nevergrad competition.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Foloso 17
  • How to handle randomness in optimizer

    How to handle randomness in optimizer

    Hi, I have previously worked¹ on a gradient-free optimization algorithm called SPSA²-³, and I have Matlab/MEX code⁴ that I can port to Python easily. I am interested in benchmarking SPSA against other zero-order optimization algorithms using nevergrad.

    I am following the instructions for benchmarking a new optimizer given in adding-your-own-experiments-andor-optimizers-andor-function. My understanding is that I can just add a new SPSA class in nevergrad/optimization/optimizerlib.py and implement

    • __init__
    • _internal_ask
    • _internal_provide_recommendation
    • _internal_tell

    functions, and then add SPSA to the optims variable in the right experiment function in the nevergrad/benchmark/experiments.py module; then I should be able to generate graphs like docs/resources/noise_r400s12_xpresults_namecigar,rotationTrue.png.

    However, SPSA itself uses an RNG in the _internal_ask function, but the optimizer base class does not take any seed in the __init__ function. What would be a good way to make the experiments reproducible in such a situation?
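
    For reference, nevergrad ties randomness to the parametrization: each instrumentation carries a random_state that the optimizers draw from, and the base class exposes it (in current versions) as self._rng. A minimal sketch of the seeding pattern; treat the self._rng alias as an assumption to check against your version:

    import nevergrad as ng

    optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
    # seed the random state shared by the parametrization and the optimizer
    optimizer.parametrization.random_state.seed(42)
    # inside a custom optimizer, randomness would come from the same source, e.g.
    # delta = self._rng.choice([-1.0, 1.0], size=self.dimension)  # SPSA-style perturbation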

    [1] Pushpendre Rastogi, Jingyi Zhu, James C. Spall (2016). Efficient implementation of Enhanced Adaptive Simultaneous Perturbation Algorithms. CISS 2016, pdf
    [2] https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation
    [3] https://www.chessprogramming.org/SPSA
    [4] https://github.com/se4u/FASPSA/blob/master/src/SPSA.m

    opened by se4u 16
  • Fix random state for test_noisy_artificial_function_loss

    Fix random state for test_noisy_artificial_function_loss

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Related issue: https://github.com/facebookresearch/nevergrad/issues/966. The goal is to resolve the bugs in random_state management for test_noisy_artificial_function_loss, which cause reproducibility issues. If you want to patch the current system immediately, I can add back np.random.seed(seed) to the test case.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by pacowong 15
  • Ability to tune the underlying bayes_opt params

    Ability to tune the underlying bayes_opt params

    Some problems require tuning the underlying bayes_opt params, such as the utility function being used or even the underlying GP params ... it seems that there is no way to change them using nevergrad.
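
    For what it's worth, later nevergrad versions expose a configurable Bayesian-optimization family; a hedged sketch assuming your version ships ParametrizedBO with utility_* parameters (treat the names as assumptions to verify):

    import nevergrad as ng

    # hypothetical configuration of the acquisition (utility) function
    bo_cls = ng.optimizers.ParametrizedBO(utility_kind="ei", utility_xi=0.01)
    optimizer = bo_cls(parametrization=2, budget=100)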

    opened by robert1826 15
  • Maint: Hypervolume (pyhv.py) module rewrite

    Maint: Hypervolume (pyhv.py) module rewrite

    Types of changes

    • [x] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    This PR is related to #366.

    This PR rewrites the code that generates the hypervolume indicator, used for multiobjective optimization.

    Any suggestions / comments / requests are very welcome.

    The purpose is two-fold:

    • Rewrite / refactor the existing code to improve general readability and extensibility, spot possible bugs, and add documentation.
    • Get rid of the LGPL licence in favour of the standard MIT licence used by nevergrad.
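
    As background, the hypervolume indicator is the volume dominated by a front with respect to a reference point; a minimal 2-objective sketch of the quantity being computed (illustration only, not this PR's algorithm):

    import numpy as np

    def hypervolume_2d(front: np.ndarray, reference: np.ndarray) -> float:
        # minimization: keep points strictly dominating the reference point
        pts = front[np.all(front < reference, axis=1)]
        pts = pts[np.argsort(pts[:, 0])]  # sweep along the first objective
        volume, prev_y = 0.0, float(reference[1])
        for x, y in pts:
            if y < prev_y:  # point is non-dominated in the sweep
                volume += (reference[0] - x) * (prev_y - y)
                prev_y = y
        return volume

    # hypervolume_2d(np.array([[1., 3.], [2., 2.], [3., 1.]]), np.array([4., 4.])) -> 6.0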

    How Has This Been Tested (if it applies)

    We added unit tests for the data structures that are used by the HypervolumeIndicator class.

    We test the main functionality of the new hypervolume algorithm against the old one. More integration tests should be included before this PR can be accepted.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Corwinpro 14
  • Add initial pyomo support

    Add initial pyomo support

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Pyomo supports the formulation and analysis of mathematical models for complex optimization applications and provides many models for benchmarking. Two Pyomo examples (diet and maxflow) are added in nevergrad/examples/pyomogallery/.

    https://github.com/facebookresearch/nevergrad/issues/738

    How Has This Been Tested (if it applies)

    A unit test is prepared in nevergrad/functions/pyomo/test_core.py, where a Pyomo model of a 2-variable minimization problem is defined. There are four test cases currently.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    Type: Enhancement CLA Signed Priority: Medium Difficulty: High 
    opened by pacowong 13
  • Parallel Ask-and-Tell

    Parallel Ask-and-Tell

    Is it possible to use the ask-and-tell framework in parallel? I can't find examples. I have something like the following (not the complete code):

    def ask_and_tell(optimizer, func):
        x = optimizer.ask()
        loss = func(*x.args)
        optimizer.tell(x, loss)
    
        return x, loss
    
    
    def run_Optimizer(optimizer, func, max_time=None):
        with multiprocessing.Pool(processes=multiprocessing.cpu_count() - 1) as pool:
            all_args = [(optimizer, func) for _ in range(optimizer.budget)]
            results = sorted(pool.starmap_async(ask_and_tell, all_args).get(max_time), key=lambda r: r[1])
            best_x = results[0][0]
            best_loss = results[0][1]
    
        return {"x": best_x, "fun": best_loss}
    

    But I'm getting an error saying a local object can't be pickled. Am I doing it right? Or is the problem somewhere in my code?
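
    For context, a pattern that avoids pickling the optimizer is to keep ask/tell in the main process and only ship the evaluations to workers; optimizer.minimize accepts an executor for exactly this. A minimal sketch:

    from concurrent import futures

    import nevergrad as ng

    def square(x):  # must be a top-level function so it can be pickled
        return sum((x - 0.5) ** 2)

    if __name__ == "__main__":
        optimizer = ng.optimizers.CMA(parametrization=2, budget=200, num_workers=8)
        # minimize dispatches evaluations to the executor; ask/tell stay in-process
        with futures.ProcessPoolExecutor(max_workers=optimizer.num_workers) as executor:
            recommendation = optimizer.minimize(square, executor=executor, batch_mode=False)
        print(recommendation.value)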

    opened by bacalfa 13
  • Performance issue in optimization.utils.Pruning.__call__

    Performance issue in optimization.utils.Pruning.__call__

    Steps to reproduce

    Benchmark used:

    @registry.register
    def illcond_prof(seed: tp.Optional[int] = None) -> tp.Iterator[Experiment]:
        """All optimizers on ill cond problems"""
        seedg = create_seed_generator(seed)
        for budget in [200, 30000]:
            for optim in ["CMA"]:            
                for rotation in [True]:
                    for name in ["cigar"]:
                        function = ArtificialFunction(name=name, rotation=rotation, block_dimension=50)
                        yield Experiment(function, optim, budget=budget, seed=next(seedg))
    
    1. Execute the profiler code below

    2. Modify https://github.com/facebookresearch/nevergrad/blob/294aed2253050a3cc4099b70ca0883588a5582d3/nevergrad/optimization/utils.py#L269-L273 as follows:

    new_archive.bytesdict = {}
    # new_archive.bytesdict = {
    #     b: v
    #     for b, v in archive.bytesdict.items()
    #     if any(v.get_estimation(n) <= quantiles[n] for n in names)
    # }
    
    3. Execute the profiler again and compare the results

    Observed Results

    There is a severe performance issue related to new_archive.bytesdict. If you increase the second budget in 'for budget in [200, 30000]:', the execution time unrelated to the optimizer grows further.

    Expected Results

    • Execution time is expected to be dominated by the cost of executing the optimizer.

    Profiler Code

    import cProfile
    import io
    import pstats
    import nevergrad.benchmark.__main__ as main
    
    if __name__ == '__main__':
        pr = cProfile.Profile()
        pr.enable()
        main.repeated_launch('illcond_prof',num_workers=1)
        pr.disable()
        s = io.StringIO()
        sortby = pstats.SortKey.CUMULATIVE
        ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
        ps.print_stats()
        print(s.getvalue())
    
    opened by dietmarwo 11
  • Update install to large resource class in config.yml

    Update install to large resource class in config.yml

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • yet another issue in our CI, related to caching

    yet another issue in our CI, related to caching

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • Adding climate change in Nevergrad/PCSE

    Adding climate change in Nevergrad/PCSE

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • Adding relative improvement as stopping criterion

    Adding relative improvement as stopping criterion

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Adding the relative improvement as a stopping-criterion callback to the ```base.Optimizer.minimize``` method (#589).

    How Has This Been Tested (if it applies)

    I took some parts from the script ```test_callbacks.py``` and ran some tests with different thresholds; it seems to work properly. Afterwards, I added the test function in a format analogous to the one used for ```test_duration_criterion```. A rough sketch of the stopping rule appears below.
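
    A sketch of the stopping rule itself, via a plain ask/tell loop rather than this PR's callback (thresholds are hypothetical):

    import nevergrad as ng

    def square(x):
        return sum((x - 0.5) ** 2)

    optimizer = ng.optimizers.NGOpt(parametrization=2, budget=10_000)
    rel_tol, patience = 1e-6, 50  # hypothetical settings
    best, stall = float("inf"), 0
    for _ in range(optimizer.budget):
        candidate = optimizer.ask()
        loss = square(candidate.value)
        optimizer.tell(candidate, loss)
        if best == float("inf") or loss < best - rel_tol * abs(best):
            best, stall = loss, 0  # sufficient relative improvement
        else:
            stall += 1
        if stall >= patience:
            break  # improvement below threshold for too long: stop early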

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by lolloconsoli 7
  • Bounds check error in log scalar

    Bounds check error in log scalar

    Hi, I have an instrumentation that contains a log scalar. Some samples that it generates trigger an out-of-bounds error when trying to spawn a child off their values.

    Steps to reproduce

    I've contrived this minimal example to highlight the issue, using Python 3.10.4 on Linux and nevergrad 0.5.0:

    import nevergrad as ng
    
    param = ng.p.Log(init=1e-5, lower=1e-5, upper=1e1) # This does not trigger an out-of-bounds error
    param.spawn_child(1e-5) # neither does this
    print(f"{param.value=}")
    param.spawn_child(param.value) # but this does
    

    Observed Results

    Here's the output & traceback:

    param.value=9.999999999999989e-06
    Traceback (most recent call last):
      File "/home/agimg/projects/fastpbrl/scripts/scratchpads/nevergrad_bounds.py", line 6, in <module>
        param.spawn_child(param.value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/core.py", line 348, in spawn_child
        child.value = new_value
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 190, in set
        obj._layers[-1]._layered_set_value(value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 216, in _layered_set_value
        super()._layered_set_value(np.array([value], dtype=float))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 227, in _layered_set_value
        super()._layered_set_value(np.asarray(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_datalayers.py", line 172, in _layered_set_value
        super()._layered_set_value(self.backward(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_datalayers.py", line 279, in _layered_set_value
        super()._layered_set_value(self._transform.backward(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/transforms.py", line 220, in backward
        raise ValueError(
    ValueError: Only data between [-5.] and [1.] can be transformed back. Got: [-5.]

    Expected Results

    This is a contrived example to highlight the issue. In my real code, the 1e-5 isn't an initial parameter set by me; it's randomly produced by a sample() call on the log distribution. While this behavior is technically correct, because param.value=9.999999999999989e-06 is indeed out of bounds, it doesn't make sense that the user should need boilerplate code to check that values passed to initialize a parameter are valid when they're sampled from the same parameter.

    Relevant Code

    See steps to reproduce
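
    Until this is fixed, a hypothetical workaround is to clip sampled values back into the declared bounds before spawning, absorbing the floating-point drift:

    import numpy as np
    import nevergrad as ng

    param = ng.p.Log(init=1e-5, lower=1e-5, upper=1e1)
    sample = param.sample()
    # clip back into the declared bounds before spawning a child
    safe_value = float(np.clip(sample.value, 1e-5, 1e1))
    child = param.spawn_child(safe_value)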

    opened by llucid-97 0
  • How to perform an equality constraint

    How to perform an equality constraint

    I was wondering how to perform an equality constraint using nevergrad.

    The objective is defined as:

    def objective(X):
      t1 = np.sum(X*price_list)
      t2 = reg.predict(np.append(feature_mean, [np.sum(X[:10]), np.sum(X[10:])]).reshape(1,-1))
    
      return t1 + t2
    

    where reg is a linear regression function and price_list is an array of shape (20,). I want to minimize this objective function.

    Let's say I have an array of shape (20,) to optimize; it has a lower bound of 0 and an upper bound of 100.

    instrum = ng.p.Instrumentation(
        ng.p.Array(shape=(20,)).set_bounds(lower=0, upper=100),
    )
    

    The constraint is:

    optimizer.parametrization.register_cheap_constraint(lambda X: np.sum(X[:10])*chemi_list[0] + np.sum(X[10:])*chemi_list[1] - chemi_to_achieve)
    

    which means the first 10 elements of the array times the first array of chemi_list, plus the last 10 elements of the array times the second array of chemi_list, minus chemi_to_achieve, should equal 0.

    All the constant arrays look like this:

    price_list = np.array([0.28, 0.27, 0.25, 0.27, 0.28, 0.26, 0.28, 0.28, 0.27, 0.29, 0.32,
           0.28, 0.28, 0.31, 0.32, 0.29, 0.3 , 0.29, 0.33, 0.29])
    
    chemi_list = np.array([[8.200e-02, 5.700e-03, 0.000e+00, 9.000e-05, 3.700e-04, 0.000e+00,
            6.475e-01, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            0.000e+00, 0.000e+00, 0.000e+00],
           [1.400e-03, 7.520e-01, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            1.400e-02, 0.000e+00, 0.000e+00]])
    
    chemi_to_achieve = np.array([15, 3, 0, 3.5, 0, 0, 120, 8, 8, 0, 0, 0, 0, 0, 0])
    

    But I keep getting an error:

    from concurrent import futures
    
    with futures.ThreadPoolExecutor(max_workers=optimizer.num_workers) as executor:
      recommendation = optimizer.minimize(objective, verbosity=2, executor=executor, batch_mode=False)
    

    TypeError: can only concatenate tuple (not "dict") to tuple

    Can anyone tell me how to solve this?
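
    Not an authoritative answer, but one likely culprit: with ng.p.Instrumentation, register_cheap_constraint receives the parameter's full value, an (args, kwargs) tuple, rather than the bare array; an equality is also usually approximated with a tolerance. A sketch with hypothetical stand-in data of the shapes described above:

    import numpy as np
    import nevergrad as ng

    # stand-ins for the real data, just to make the sketch self-contained
    price_list = np.random.rand(20)
    chemi_list = np.random.rand(2, 15)
    chemi_to_achieve = np.random.rand(15)

    instrum = ng.p.Instrumentation(ng.p.Array(shape=(20,)).set_bounds(lower=0, upper=100))
    optimizer = ng.optimizers.NGOpt(parametrization=instrum, budget=1000)

    def eq_constraint(value) -> bool:
        X = value[0][0]  # value is ((args), {kwargs}); the array is the first positional arg
        residual = np.sum(X[:10]) * chemi_list[0] + np.sum(X[10:]) * chemi_list[1] - chemi_to_achieve
        return bool(np.all(np.abs(residual) < 1e-2))  # equality up to a tolerance

    optimizer.parametrization.register_cheap_constraint(eq_constraint)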

    opened by redcican 1
Releases(v0.5.0)
  • 0.5.0(Mar 8, 2022)

  • 0.4.3(Jan 28, 2021)

    This version provides a few fixes and the new multi-objective API of optimizers (you can now provide a list/array of floats to tell directly). This allows for more efficient multi-objective optimization with some optimizers (DE, NGOpt). Future work will continue to improve multi-objective capabilities and aim at improving constraint management.

    See CHANGELOG for details.
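
    A minimal sketch of the API described above (losses passed directly to tell):

    import nevergrad as ng

    def multiobjective(x):
        return [sum(x ** 2), sum((x - 1) ** 2)]  # two losses to minimize

    optimizer = ng.optimizers.DE(parametrization=2, budget=100)
    for _ in range(optimizer.budget):
        candidate = optimizer.ask()
        optimizer.tell(candidate, multiobjective(candidate.value))  # list of floats
    for point in optimizer.pareto_front():
        print(point.value)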

  • 0.4.2(Aug 4, 2020)

    This version should be robust. The following versions may become more unstable as we add more native multiobjective optimization as an experimental feature. We are also in the process of simplifying the naming pattern for the "NGO/Shiwa" type optimizers, which may cause some changes in the future.

    See CHANGELOG for details.

  • 0.4.1(May 7, 2020)

  • v0.4.0(Mar 9, 2020)

    This is the final step in creating the new instrumentation/parametrization framework and removing the old one. Learn more on the Facebook user group.

    Important changes:

    • the old instrumentation system disappears; all deprecation warnings are removed and are now errors.
    • the archive no longer stores all evaluated points, for memory reasons.

    See CHANGELOG for more details.

  • v0.3.2(Feb 5, 2020)

    This is the second step in propagating the new instrumentation/parametrization framework. Learn more on the Facebook user group.

    If you are looking for stability, wait for version 0.4.0; the intermediary releases will help by providing deprecation warnings. In particular, here are the important changes for this release:

    • First argument of optimizers is renamed to parametrization instead of instrumentation, for consistency (deprecation warning).
    • Old instrumentation classes now raise deprecation warnings.
    • create_candidate raises deprecation warnings.
    • The Candidate class is completely removed and replaced by Parameter.

    See CHANGELOG for more details. All deprecated code will be removed in the following version (v0.4.0)

  • v0.3.1(Jan 23, 2020)

    This is the first step in propagating the new instrumentation/parametrization framework. Learn more on the Facebook user group and in the CHANGELOG. If you are looking for stability, wait for version 0.4.0; the intermediary releases will help by providing deprecation warnings.

  • v0.3.0(Jan 8, 2020)

    This release includes new experiment features such as:

    • constraint management
    • multiobjective functions
    • new optimizers

    This is a stable release before a transition phase in which we will refactor the instrumentation part of the package, allowing a lot more flexibility. These changes will unfortunately probably break some user code, and we expect a few bugs during the transition period.

    See CHANGELOG for more about this release, and checkout Nevergrad users Facebook group for more information about upcoming changes.

  • v0.2.2(Jun 20, 2019)

    This release improves reproducibility by providing a random state to each instrumentation, which is used by the optimizers. It also introduces some namespace changes to make code clearer. See the CHANGELOG for more details.

  • v0.2.1(May 16, 2019)

  • v0.2.0(Apr 11, 2019)

    This release makes major API changes. Most noticeably:

    • first parameter of optimizers is now instrumentation instead of dimension. This allows the optimizer to have information on the underlying structure. Ints are still allowed as before and will set the instrumentation to Instrumentation(var.Array(n)) (which is basically the identity).
    • ask() and provide_recommendation() now return a Candidate with attributes args, kwargs (depending on the instrumentation) and data (the array which was formerly returned). tell must now receive this candidate as well, instead of the array.

    More details can be found in the CHANGELOG and in the documentation.
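
    A small sketch of the flow described above, following this release's API (my_function is a hypothetical objective):

    import nevergrad as ng

    def my_function(x):  # receives the array via candidate.args
        return float(sum((x - 1.0) ** 2))

    optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
    candidate = optimizer.ask()
    loss = my_function(*candidate.args, **candidate.kwargs)
    optimizer.tell(candidate, loss)
    recommendation = optimizer.provide_recommendation()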

  • v0.1.6(Mar 15, 2019)

    This fixes a bug in PSO introduced by v0.1.5. This is also the last release with the BaseFunction class, which will disappear in favor of InstrumentedFunction (a breaking change for custom benchmark functions).

  • v0.1.5(Mar 7, 2019)

  • v0.1.3(Jan 28, 2019)

  • v0.1.2(Jan 25, 2019)

  • v0.1.1(Jan 8, 2019)
