Nevergrad - A gradient-free optimization platform


nevergrad is a Python 3.6+ library. It can be installed with:

pip install nevergrad

More installation options, including Windows installation, and complete instructions are available in the "Getting started" section of the documentation.

You can join the Nevergrad users Facebook group.

Minimizing a function using an optimizer (here NGOpt) is straightforward:

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # recommended value
>>> [0.49971112 0.5002944]
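The same run can also be driven step by step through the ask-and-tell interface (a minimal sketch equivalent to the `minimize` call above; useful when evaluations happen outside a plain Python function):

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

# same setup as above, driven manually through ask and tell
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()       # draw a candidate to evaluate
    loss = square(candidate.value)    # candidate.value is a numpy array here
    optimizer.tell(candidate, loss)   # report the observed loss
print(optimizer.provide_recommendation().value)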

nevergrad also supports bounded continuous variables, discrete variables, and mixtures of the two. To use them, specify the input space:

import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2)**2 + (batch_size - 4)**2 + (0 if architecture == "conv" else 10)

# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"])
)

optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)

# show the recommended keyword arguments of the function
print(recommendation.kwargs)
>>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}

Learn more about parametrization in the documentation!
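For a single array-valued input, a bounded Array parameter can be passed as the parametrization directly. A small sketch (the target function here is illustrative only, consistent with the set_bounds usage shown in the issues below):

import numpy as np
import nevergrad as ng

def cost(x: np.ndarray) -> float:
    return float(np.sum((x - 3.0) ** 2))

# a 20-dimensional vector, each coordinate bounded to [0, 100]
param = ng.p.Array(shape=(20,)).set_bounds(lower=0, upper=100)
optimizer = ng.optimizers.NGOpt(parametrization=param, budget=200)
recommendation = optimizer.minimize(cost)
print(recommendation.value)  # array of shape (20,), close to 3.0 in every coordinate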

Example of optimization

Convergence of a population of points to the minimum with two-points DE.

Documentation

Check out our documentation! It's still a work in progress; don't hesitate to submit issues and/or PRs to update it and make it clearer!

Citing

@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}

License

nevergrad is released under the MIT license. See LICENSE for additional details about it. See also our Terms of Use and Privacy Policy.

Comments
  • Adding IOHexperimenter functions

    Adding IOHexperimenter functions

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Adds the functions from the PBO-suite of the IOHexperimenter into nevergrad (#338)

    How Has This Been Tested (if it applies)

    Still in progress

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Dvermetten 22
  • add support for alternative cmaes implementation

    add support for alternative cmaes implementation

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

Poor performance of cmaes in higher dimensions

    How Has This Been Tested (if it applies)

Tested by including the new optimization algorithm in several benchmarks and executing them successfully

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [ ] All tests passed, and additional code has been covered with new tests.

    Purpose of this PR is:

    a) Enable experiments with an alternative CMA implementation - install with "pip install fcmaes".

    b) Exposing popsize as an important CMA parameter

    c) Reading _es.stop() / es.stop to check whether CMA has terminated, and discarding the rest of the assigned budget in that case. Otherwise, time comparisons are unfair to algorithms that cannot detect when they are stuck. Maybe some of them can; in that case this idea should be generalized.

    General observations:

    • Benchmarks where the optimization algos are configured with workers=1 run fastest if

      a) Benchmark multiprocessing is configured using workers=multiprocessing.cpu_count(). I checked this with a variety of CPUs from Intel and AMD.

      b) The algos use only one thread, which means you have to set
      export MKL_NUM_THREADS=1 (or export OPENBLAS_NUM_THREADS=1 if numpy is configured to use OpenBLAS); see the sketch after this list. Without this, CMA, which relies heavily on BLAS, suffers badly - sometimes it is a factor of 10 slower.

      c) Use export MKL_DEBUG_CPU_TYPE=5 if you are using an AMD CPU; otherwise Intel MKL "ignores" the advanced SIMD instructions of your CPU.

    • popsize is an important CMA parameter which should be benchmarked using different settings. Higher popsize means a broader search which can slow down the optimization for lower budgets but often pays off when the budget is higher.

    • Scaling should not be viewed only as a scalar; often the dimensions should be scaled separately. FCMA does this automatically if bounds are defined: (lb, ub) is mapped onto ([-1]*dim, [1]*dim). Nevergrad doesn't provide the optimization algorithms with information about bounds, which may be a bad idea.
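      A sketch of observation b) in Python: the thread caps must be set before numpy (and hence BLAS) is first imported; the variables are standard MKL/OpenBLAS environment knobs, not nevergrad API.

      import os

      # cap BLAS at one thread per process (must happen before numpy is imported anywhere)
      os.environ["MKL_NUM_THREADS"] = "1"
      os.environ["OPENBLAS_NUM_THREADS"] = "1"
      # on AMD CPUs, keep Intel MKL from dropping to a slow generic code path
      os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

      import numpy as np  # noqa: E402 - imported only after the environment is configured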

    The CMA implementation https://github.com/dietmarwo/fast-cma-es offered as an alternative is much faster in higher dimensions because it better utilizes the underlying BLAS library and avoids loops where possible. No "DiagonalOnly" option is supported yet, but tests have to show whether this option still makes sense.

    With all the benchmarks / tests implemented in Nevergrad it should be possible to check:

    • If there are cases where FCMA performs worse than CMA.
    • How the execution times compare.
    • Which parameters are optimal for FCMA.
    • How to reconfigure NGO - if and when to use FCMA with which parameters - to achieve optimal results.

    My tests have shown that the advantage of FCMA is higher when both CMAs are compared standalone or with an efficient parallel retry mechanism. Nevertheless, FCMA is significantly faster, especially in higher dimensions, outperforming most of the other algorithms in execution time.

    CLA Signed 
    opened by dietmarwo 18
  • NSGA-II implementation

    NSGA-II implementation

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Related issue: https://github.com/facebookresearch/nevergrad/issues/820. This PR adds the NSGA-II algorithm. The key procedures are the computation of the nondomination rank and crowding distance for the candidates. The initialization, crossover, and mutation procedures are borrowed from DE and ES in the current library.

    How Has This Been Tested

    1. The nondomination rank and crowding distance are tested by applying them to manually created candidates (i.e. p.Parameter variables) whose losses are preset.

    2. The NSGA-II algorithm is tested on a multiobjective function which was originally used to test CMA.

    3. Test for correct population size as defined in test_optimizerlib.py

    Things to Review

    1. In the current implementation, I'm not quite sure if I use "ask" and "tell" methods correctly.
    2. Not sure why "_internal_tell_not_asked" will happen as in DE.
    3. Is the custom "recommend" method in DE necessary? I found that ES does not implement "recommend".
    4. Do I need to update the "heritage" variable for a candidate?

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by pacowong 17
  • ElectricalMix Simulator using nevergrad

    ElectricalMix Simulator using nevergrad

    Pull request to apply for the Nevergrad competition.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Foloso 17
  • How to handle randomness in optimizer

    How to handle randomness in optimizer

    Hi, I have previously worked¹ on a gradient-free optimization algorithm called SPSA²-³, and I have Matlab/mex code⁴ that I can port to Python easily. I am interested in benchmarking SPSA against other zero-order optimization algorithms using nevergrad.

    I am following the instructions for benchmarking a new optimizer given in adding-your-own-experiments-andor-optimizers-andor-function. My understanding is that I can just add a new SPSA class in nevergrad/optimization/optimizerlib.py and implement

    • __init__
    • _internal_ask
    • _internal_provide_recommendation
    • _internal_tell

    functions, add SPSA to the optims variable in the right experiment function in the nevergrad/benchmark/experiments.py module, and then I should be able to generate graphs like docs/resources/noise_r400s12_xpresults_namecigar,rotationTrue.png.

    However, SPSA itself uses an rng in the _internal_ask function. But the optimizer base class does not take any seed in the __init__ function. What will be a good way to make the experiments reproducible in such situation?
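    One common way to get reproducibility (an assumption based on the random-state support mentioned in the v0.2.2 release notes below, not an official SPSA recipe) is to seed the parametrization's random state, which the optimizers use:

    import numpy as np
    import nevergrad as ng

    optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
    # the parametrization carries the random state used during optimization;
    # seeding it makes the sequence of ask() calls reproducible
    optimizer.parametrization.random_state = np.random.RandomState(42)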

    [1] Pushpendre Rastogi, Jingyi Zhu, James C. Spall (2016). Efficient implementation of Enhanced Adaptive Simultaneous Perturbation Algorithms. CISS 2016.
    [2] https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation
    [3] https://www.chessprogramming.org/SPSA
    [4] https://github.com/se4u/FASPSA/blob/master/src/SPSA.m

    opened by se4u 16
  • Fix random state for test_noisy_artificial_function_loss

    Fix random state for test_noisy_artificial_function_loss

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Related issue: https://github.com/facebookresearch/nevergrad/issues/966. The goal is to resolve the bugs in random_state management for test_noisy_artificial_function_loss which cause reproducibility issues. If you want to patch the current system immediately, I can add back np.random.seed(seed) to the test case.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by pacowong 15
  • Ability to tune the underlying bayes_opt params

    Ability to tune the underlying bayes_opt params

    Some problems require tuning the underlying bayes_opt params, such as the utility function being used or even the underlying GP params ... it seems that there is no way to change them using nevergrad.

    opened by robert1826 15
  • Maint: Hypervolume (pyhv.py) module rewrite

    Maint: Hypervolume (pyhv.py) module rewrite

    Types of changes

    • [x] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    This PR is related to #366.

    This PR rewrites the code that generates the hypervolume indicator, used for multiobjective optimization.

    Any suggestions / comments / requests are very welcome.

    The purpose is two-fold:

    • Rewrite / refactor the existing code to improve the general readability and extendability, spot possible bugs, and add documentation
    • Get rid of the LGPL licence in favour of the standard MIT licence, used by nevergrad.

    How Has This Been Tested (if it applies)

    We added unit tests for the data structures that are used by the HypervolumeIndicator class.

    We test the main functionality of the new hypervolume algorithm against the old one. More integration tests should be included before this PR can be accepted.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by Corwinpro 14
  • Add initial pyomo support

    Add initial pyomo support

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. It provides lots of models for benchmarking. Two Pyomo examples (diet and maxflow) are added in nevergrad/examples/pyomogallery/.

    https://github.com/facebookresearch/nevergrad/issues/738

    How Has This Been Tested (if it applies)

    A unit test is prepared in nevergrad/functions/pyomo/test_core.py, where a Pyomo model of a 2-variable minimization problem is defined. There are currently four test cases.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    Type: Enhancement CLA Signed Priority: Medium Difficulty: High 
    opened by pacowong 13
  • Parallel Ask-and-Tell

    Parallel Ask-and-Tell

    Is it possible to use the ask-and-tell framework in parallel? I can't find examples. I have something as follows (not the complete code):

    def ask_and_tell(optimizer, func):
        x = optimizer.ask()
        loss = func(*x.args)
        optimizer.tell(x, loss)
    
        return x, loss
    
    
    def run_Optimizer(optimizer, func, max_time=None):
        with multiprocessing.Pool(processes=multiprocessing.cpu_count() - 1) as pool:
            all_args = [(optimizer, func) for _ in range(optimizer.budget)]
            results = sorted(pool.starmap_async(ask_and_tell, all_args).get(max_time), key=lambda r: r[1])
            best_x = results[0][0]
            best_loss = results[0][1]
    
        return {"x": best_x, "fun": best_loss}
    

    But I'm getting an error saying a local object can't be pickled. Am I doing it right? Or is the problem somewhere in my code?
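    For reference, a pattern nevergrad does support (and which another issue below also uses) is handing an executor to `minimize` and letting the optimizer schedule its own ask/tell calls. A sketch with the toy `square` function from the README; the pickling error above typically comes from sending the optimizer or a locally defined function to worker processes:

    from concurrent import futures

    import nevergrad as ng

    def square(x):
        return sum((x - .5) ** 2)

    # num_workers tells the optimizer how many candidates may be in flight at once
    optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100, num_workers=4)
    with futures.ProcessPoolExecutor(max_workers=optimizer.num_workers) as executor:
        recommendation = optimizer.minimize(square, executor=executor, batch_mode=False)
    print(recommendation.value)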

    opened by bacalfa 13
  • Performance issue in optimization.utils.Pruning.__call__

    Performance issue in optimization.utils.Pruning.__call__

    Steps to reproduce

    Benchmark used:

    @registry.register
    def illcond_prof(seed: tp.Optional[int] = None) -> tp.Iterator[Experiment]:
        """All optimizers on ill cond problems"""
        seedg = create_seed_generator(seed)
        for budget in [200, 30000]:
            for optim in ["CMA"]:            
                for rotation in [True]:
                    for name in ["cigar"]:
                        function = ArtificialFunction(name=name, rotation=rotation, block_dimension=50)
                        yield Experiment(function, optim, budget=budget, seed=next(seedg))
    
    1. Execute the profiler code below

    2. Modify https://github.com/facebookresearch/nevergrad/blob/294aed2253050a3cc4099b70ca0883588a5582d3/nevergrad/optimization/utils.py#L269-L273 as follows:

    new_archive.bytesdict = {}
    # new_archive.bytesdict = {
    #     b: v
    #     for b, v in archive.bytesdict.items()
    #     if any(v.get_estimation(n) <= quantiles[n] for n in names)
    # }
    
    3. Execute the profiler again and compare the results

    Observed Results

    There is a severe performance issue related to new_archive.bytesdict. If you increase the second budget in 'for budget in [200, 30000]:', the execution time unrelated to the optimizer grows further.

    Expected Results

    • Execution time is expected to be dominated by the cost of executing the optimizer.

    Profiler Code

    import cProfile
    import io
    import pstats
    import nevergrad.benchmark.__main__ as main
    
    if __name__ == '__main__':
        pr = cProfile.Profile()
        pr.enable()
        main.repeated_launch('illcond_prof',num_workers=1)
        pr.disable()
        s = io.StringIO()
        sortby = pstats.SortKey.CUMULATIVE
        ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
        ps.print_stats()
        print(s.getvalue())
    
    opened by dietmarwo 11
  • Update install to large resource class in config.yml

    Update install to large resource class in config.yml

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • yet another issue in our CI, related to caching

    yet another issue in our CI, related to caching

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • Adding climate change in Nevergrad/PCSE

    Adding climate change in Nevergrad/PCSE

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by teytaud 0
  • Adding relative improvement as stopping criterion

    Adding relative improvement as stopping criterion

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and Context / Related issue

    Adding relative improvement as a stopping-criterion callback to the `base.Optimizer.minimize` method (#589).

    How Has This Been Tested (if it applies)

    I took some parts from the script `test_callbacks.py` and ran some tests with different thresholds; it seems to work properly. Afterwards I added the test function in a format analogous to the one for `test_duration_criterion`.
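    For illustration, the criterion itself can also be hand-rolled in a plain ask/tell loop. This is a sketch of the idea only (the threshold value is arbitrary), not the callback API added by this PR:

    import nevergrad as ng

    def square(x):
        return sum((x - .5) ** 2)

    optimizer = ng.optimizers.NGOpt(parametrization=2, budget=1000)
    best = float("inf")
    threshold = 1e-4  # stop once an improvement is smaller than 0.01% of the best loss
    for _ in range(optimizer.budget):
        candidate = optimizer.ask()
        loss = square(candidate.value)
        optimizer.tell(candidate, loss)
        if loss < best:
            relative = (best - loss) / abs(best) if best != float("inf") else float("inf")
            best = loss
            if relative < threshold:
                break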

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CLA).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by lolloconsoli 7
  • Bounds check error in log scalar

    Bounds check error in log scalar

    Hi, I have an instrumentation that contains a log scalar. Some samples that it generates trigger an out-of-bounds error when trying to spawn a child off its values.

    Steps to reproduce

    I've contrived this minimal example to highlight the issue (using Python 3.10.4 on Linux, nevergrad 0.5.0):

    import nevergrad as ng
    
    param = ng.p.Log(init=1e-5, lower=1e-5, upper=1e1) # This does not trigger an out-of-bounds error
    param.spawn_child(1e-5) # neither does this
    print(f"{param.value=}")
    param.spawn_child(param.value) # but this does
    

    Observed Results

    Here's the output & traceback:

    param.value=9.999999999999989e-06
    Traceback (most recent call last):
      File "/home/agimg/projects/fastpbrl/scripts/scratchpads/nevergrad_bounds.py", line 6, in <module>
        param.spawn_child(param.value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/core.py", line 348, in spawn_child
        child.value = new_value
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 190, in set
        obj._layers[-1]._layered_set_value(value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 216, in _layered_set_value
        super()._layered_set_value(np.array([value], dtype=float))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 227, in _layered_set_value
        super()._layered_set_value(np.asarray(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_datalayers.py", line 172, in _layered_set_value
        super()._layered_set_value(self.backward(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 96, in _layered_set_value
        return self._call_deeper("_layered_set_value", value)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_layering.py", line 84, in _call_deeper
        return func(*args, **kwargs)
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/_datalayers.py", line 279, in _layered_set_value
        super()._layered_set_value(self._transform.backward(value))
      File "/home/agimg/anaconda3/envs/pbrl/lib/python3.10/site-packages/nevergrad/parametrization/transforms.py", line 220, in backward
        raise ValueError(
    ValueError: Only data between [-5.] and [1.] can be transformed back. Got: [-5.]

    Expected Results

    This is a contrived example to highlight the issue. In my real code, the 1e-5 isn't an initial parameter set by me; it's randomly produced by a sample() call on the log distribution. While this behavior is technically correct, because param.value=9.999999999999989e-06 is indeed out of bounds, it doesn't make sense that the user should need boilerplate code to check that values passed to initialize a parameter are valid when they're sampled from that same parameter.

    Relevant Code

    See steps to reproduce
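    Until the bug is fixed, one possible workaround (my own sketch, not an endorsed fix) is to clamp values back into the declared bounds before spawning a child:

    import numpy as np
    import nevergrad as ng

    param = ng.p.Log(init=1e-5, lower=1e-5, upper=1e1)
    # clamp away the floating-point drift before handing the value back to nevergrad
    safe_value = float(np.clip(param.value, 1e-5, 1e1))
    child = param.spawn_child(safe_value)  # no out-of-bounds error for the clamped value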

    opened by llucid-97 0
  • How to perform an equality constraint

    How to perform an equality constraint

    I was wondering how to perform an equality constraint using nevergrad:

    the objective is defined as:

    def objective(X):
      t1 = np.sum(X*price_list)
      t2 = reg.predict(np.append(feature_mean, [np.sum(X[:10]), np.sum(X[10:])]).reshape(1,-1))
    
      return t1 + t2
    

    where reg is a linear regression model and price_list is an array of shape (20,). I want to minimize this objective function.

    Let's say I have an array of shape (20,) to optimize, with a lower bound of 0 and an upper bound of 100.

    instrum = ng.p.Instrumentation(
        ng.p.Array(shape=(20,)).set_bounds(lower=0, upper=100),
    )
    

    The constraint is:

    optimizer.parametrization.register_cheap_constraint(lambda X: np.sum(X[:10])*chemi_list[0] + np.sum(X[10:])*chemi_list[1] - chemi_to_achieve)
    

    which means the first 10 elements of the array times the first row of chemi_list, plus the last 10 elements of the array times the second row of chemi_list, minus chemi_to_achieve, should equal 0.

    All the constant arrays look like this:

    price_list = np.array([0.28, 0.27, 0.25, 0.27, 0.28, 0.26, 0.28, 0.28, 0.27, 0.29, 0.32,
           0.28, 0.28, 0.31, 0.32, 0.29, 0.3 , 0.29, 0.33, 0.29])
    
    chemi_list = np.array([[8.200e-02, 5.700e-03, 0.000e+00, 9.000e-05, 3.700e-04, 0.000e+00,
            6.475e-01, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            0.000e+00, 0.000e+00, 0.000e+00],
           [1.400e-03, 7.520e-01, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00,
            1.400e-02, 0.000e+00, 0.000e+00]])
    
    chemi_to_achieve = np.array([15, 3, 0, 3.5, 0, 0, 120, 8, 8, 0, 0, 0, 0, 0, 0])
    

    But I keep getting an error:

    from concurrent import futures
    
    with futures.ThreadPoolExecutor(max_workers=optimizer.num_workers) as executor:
      recommendation = optimizer.minimize(objective, verbosity=2, executor=executor, batch_mode=False)
    

    TypeError: can only concatenate tuple (not "dict") to tuple

    Can anyone tell me how to solve this?
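    One plausible fix, as a sketch under two assumptions (that the constraint function receives the candidate value, which for an Instrumentation is an (args, kwargs) pair, and that returning a boolean with True meaning "satisfied" is accepted): unpack the value and replace exact equality with a tolerance.

    import numpy as np
    import nevergrad as ng

    tol = 1e-3  # hypothetical tolerance: exact equality is rarely reachable numerically

    def equality_constraint(value) -> bool:
        # for an Instrumentation, value is ((positional args...), {keyword args})
        (X,), _kwargs = value
        residual = (np.sum(X[:10]) * chemi_list[0]
                    + np.sum(X[10:]) * chemi_list[1]
                    - chemi_to_achieve)
        return bool(np.all(np.abs(residual) < tol))

    instrum = ng.p.Instrumentation(ng.p.Array(shape=(20,)).set_bounds(lower=0, upper=100))
    instrum.register_cheap_constraint(equality_constraint)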

    opened by redcican 1
Releases (latest: v0.5.0)
  • 0.5.0(Mar 8, 2022)

  • 0.4.3(Jan 28, 2021)

    This version provides a few fixes and the new multi-objective API of optimizers (you can now provide a list/array of floats to tell directly). This allows for more efficient multi-objective optimization with some optimizers (DE, NGOpt). Future work will continue to improve multi-objective capabilities and aim at improving constraint management.
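    In code, the new API amounts to telling a list of floats instead of a single loss (a minimal sketch with two toy objectives):

    import nevergrad as ng

    optimizer = ng.optimizers.DE(parametrization=2, budget=100)
    for _ in range(optimizer.budget):
        candidate = optimizer.ask()
        x = candidate.value
        # two objectives reported together; the optimizer handles the trade-off
        losses = [float(sum((x - 0.5) ** 2)), float(sum((x + 0.5) ** 2))]
        optimizer.tell(candidate, losses)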

    See CHANGELOG for details.

  • 0.4.2(Aug 4, 2020)

    This version should be robust. The following versions may become more unstable as we add more native multiobjective optimization as an experimental feature. We are also in the process of simplifying the naming pattern for the "NGO/Shiwa"-type optimizers, which may cause some changes in the future.

    See CHANGELOG for details.

  • 0.4.1(May 7, 2020)

  • v0.4.0(Mar 9, 2020)

    This is the final step for creating the new instrumentation/parametrization framework and removing the old one. Learn more on the Facebook user group

    Important changes:

    • the old instrumentation system disappears; all deprecation warnings are removed and now raise errors.
    • the archive no longer stores all evaluated points, for memory reasons.

    See CHANGELOG for more details.

  • v0.3.2(Feb 5, 2020)

    This is the second step to propagate the new instrumentation/parametrization framework. Learn more on the Facebook user group

    If you are looking for stability, wait for version 0.4.0; the intermediate releases will help by providing deprecation warnings. In particular, here are the important changes for this release:

    • First argument of optimizers is renamed from instrumentation to parametrization for consistency (deprecation warning).
    • Old instrumentation classes now raise deprecation warnings.
    • create_candidate raises deprecation warnings.
    • The Candidate class is removed and completely replaced by Parameter.

    See CHANGELOG for more details. All deprecated code will be removed in the following version (v0.4.0)

  • v0.3.1(Jan 23, 2020)

    This is the first step to propagate the new instrumentation/parametrization framework. Learn more on the Facebook user group and in the CHANGELOG. If you are looking for stability, wait for version 0.4.0; the intermediate releases will help by providing deprecation warnings.

  • v0.3.0(Jan 8, 2020)

    This release includes new experiment features such as:

    • constraint management
    • multiobjective functions
    • new optimizers

    This is a stable release before a transition phase in which we will refactor the instrumentation part of the package, allowing a lot more flexibility. These changes will unfortunately probably break some user code, and we expect a few bugs during the transition period.

    See CHANGELOG for more about this release, and checkout Nevergrad users Facebook group for more information about upcoming changes.

  • v0.2.2(Jun 20, 2019)

    This release improves reproducibility by providing a random state to each instrumentation, which is used by the optimizers. It also introduces some namespace changes to make code clearer. See the CHANGELOG for more details.

  • v0.2.1(May 16, 2019)

  • v0.2.0(Apr 11, 2019)

    This release makes major API changes. Most noticeably:

    • first parameter of optimizers is now instrumentation instead of dimension. This allows the optimizer to have information on the underlying structure. Ints are still allowed as before and will set the instrumentation to Instrumentation(var.Array(n)) (which is basically the identity).
    • ask() and provide_recommendation() now return a Candidate with attributes args, kwargs (depending on the instrumentation) and data (the array which was formerly returned). tell must now receive this candidate as well instead of the array.

    More details can be found in the CHANGELOG and in the documentation.

  • v0.1.6(Mar 15, 2019)

    This fixes a bug in PSO introduced by v0.1.5. This is also the last release with the BaseFunction class which will disappear in favor of InstrumentedFunction (breaking change for custom benchmark functions).

  • v0.1.5(Mar 7, 2019)

  • v0.1.3(Jan 28, 2019)

  • v0.1.2(Jan 25, 2019)

  • v0.1.1(Jan 8, 2019)
