A hyperparameter optimization framework

Overview

Optuna: A hyperparameter optimization framework

Website | Docs | Install Guide | Tutorial

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.

News

Help us create the next version of Optuna!

The Optuna 3.0 Roadmap has been published for review. Please take a look at the planned improvements to Optuna and share your feedback in the GitHub issues. PR contributions are also welcome!

Please take a few minutes to fill in this survey, and let us know how you use Optuna now and what improvements you'd like. 🤔

All questions optional. 🙇‍♂️ https://forms.gle/mCAttqxVg5oUifKV8

Key Features

Optuna has modern functionalities as follows:

  • Lightweight, versatile, and platform agnostic architecture
  • Pythonic search spaces
  • Efficient optimization algorithms
  • Easy parallelization
  • Quick visualization

Basic Concepts

We use the terms study and trial as follows:

  • Study: optimization based on an objective function
  • Trial: a single execution of the objective function

Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svm_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed to automate and accelerate such optimization studies.

import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm

# Define an objective function to be minimized.
def objective(trial):

    # Invoke suggest methods of a Trial object to generate hyperparameters.
    regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
    if regressor_name == 'SVR':
        svr_c = trial.suggest_float('svr_c', 1e-10, 1e10, log=True)
        regressor_obj = sklearn.svm.SVR(C=svr_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)

    X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor_obj.fit(X_train, y_train)
    y_pred = regressor_obj.predict(X_val)

    error = sklearn.metrics.mean_squared_error(y_val, y_pred)

    return error  # An objective value linked with the Trial object.

study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.

Examples

Examples can be found in optuna/optuna-examples.

Integrations

Integration modules, which allow pruning, or early stopping, of unpromising trials, are available for many popular libraries; see optuna/optuna-examples and the documentation for the full list.
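
Regardless of the specific integration module, pruning is driven by Optuna's core report/prune API; below is a minimal sketch of how an objective cooperates with a pruner (the training loop is a toy stand-in):

import optuna

def objective(trial):
    lr = trial.suggest_float('lr', 1e-5, 1e-1, log=True)
    score = 0.0
    for step in range(100):
        score += lr  # toy stand-in for one epoch of training
        trial.report(score, step)  # report an intermediate value to the pruner
        if trial.should_prune():  # the pruner decides whether to stop this trial early
            raise optuna.TrialPruned()
    return score

study = optuna.create_study(direction='maximize', pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)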

Web Dashboard (experimental)

The new web dashboard is under development at optuna-dashboard. It is still experimental, but much better in many regards. Feature requests and bug reports are welcome!

(Screenshots: managing studies; visualizing results with interactive real-time graphs.)

Install optuna-dashboard via pip:

$ pip install optuna-dashboard
$ optuna-dashboard sqlite:///db.sqlite3
...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.

Installation

Optuna is available at the Python Package Index and on Anaconda Cloud.

# PyPI
$ pip install optuna
# Anaconda Cloud
$ conda install -c conda-forge optuna

Optuna supports Python 3.6 or newer.

We also provide Optuna Docker images on DockerHub.
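
For example, assuming the optuna/optuna image name on DockerHub, you can try it like this:

$ docker pull optuna/optuna
$ docker run -it --rm optuna/optuna python -c "import optuna; print(optuna.__version__)"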

Communication

Contribution

Any contributions to Optuna are more than welcome!

If you are new to Optuna, please check the good first issues. They are relatively simple and well defined, and are often good starting points for getting familiar with the contribution workflow and the other developers.

If you have already contributed to Optuna, we recommend the other contribution-welcome issues.

For general guidelines on how to contribute to the project, take a look at CONTRIBUTING.md.

Reference

Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).

Comments
  • Add Dask integration

    Motivation

    This PR adds initial native Dask support for Optuna. See the discussion over in https://github.com/optuna/optuna/issues/1766 for additional context.

    Description of the changes

    This PR adds a new DaskStorage class to Optuna, adapted from https://github.com/jrbourbeau/dask-optuna.

    Still TODO:

    • [x] Add documentation on DaskStorage
    • [ ] ...

    Closes https://github.com/optuna/optuna/issues/1766

    feature optuna.integration optuna.storages 
    opened by jrbourbeau 58
  • Add QMC sampler

    Motivation

    As in #1797, a Quasi-Monte Carlo (QMC) sampler should be supported as a good alternative to RandomSampler. I want to discuss the implementation details of the QMCSampler, as there are several design choices for ensuring that it works in a distributed environment.

    Description of the changes

    QMCSampler is added at optuna/sampler/_qmc.py. Since scipy will introduce support for QMC in its 1.7.0 release, we use its implementation to generate several kinds of QMC sequences.
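
    For reference, here is a minimal sketch of drawing a scrambled Sobol sequence with scipy's qmc module (available since scipy 1.7); the dimensionality and sample count below are illustrative:

    from scipy.stats import qmc

    sobol = qmc.Sobol(d=2, scramble=True, seed=0)  # 2-dimensional Sobol generator
    points = sobol.random(n=8)  # the next 8 points of the sequence, in order
    print(points)  # each point lies in the unit square [0, 1)^2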

    Design Choices around Distributed Environment

    To suggest QMC samples using distributed workers, we have to synchronize them. This is because QMC sequences are strictly ordered; they cannot be sampled independently, and each worker must know exactly how many QMC samples have been suggested so far. Since Optuna's storage does not support atomic transactions (as far as I understood), we have to compromise somewhere, and thus there are some design choices. The possible designs I considered are as follows:

    • (Current PR) Use the system attributes of the study to count how many QMC samples have been suggested so far. The advantage of this implementation is that it is simple.
    • In the system attributes of each trial, store which QMC sample (qmc_id) it is. Then, every time we suggest a new sample, we check all past trials to see how many QMC points have been suggested so far. (The strategy above should be faster than this one, because it accesses O(1) information while this one accesses O(n_trials) information in each trial.)
    • Set Trial._trial_id or Trial.number equal to qmc_id + const. (This fails when there are different types of samplers running simultaneously.)
    • Set the maximum number of QMC samples and sample all points at the construction of QMCSampler. In each trial, we randomly pick a point that has not yet been sampled from the pool of pre-sampled points. This strategy is the same as GridSampler's. (This can be inefficient if we fail to sample the early points of the QMC sequence, as the earlier points of a QMC sequence are more important than the later ones. So it seems better to sample QMC points in order.)

    TODO:

    • [x] Discuss design choices based on current design
    • [x] Add docstring
    • [x] Add test code
    • [x] Remove unnecessary comments
    feature optuna.samplers no-stale 
    opened by kstoneriv3 51
  • Optuna prunes too aggressively, when objective is jittery (early stopping patience needed)

    Motivation

    My objective function is jittery. So Optuna is very aggressive and prunes trials when the objective increases slightly due to jitter.

    This means Optuna is worse than hand tuning, because it prefers only optimization paths that are not jittery and monotonically decrease, which aren't always the ones that converge to the lowest optimum. In some problem settings, it is unusable.

    Description

    What would be better is an optional parameter that says how many checks must occur without improvement, in order for early stopping pruning to happen.

    Optuna Pruners should have a parameter early_stopping_patience (or checks_patience), which defaults to 1. If the objective hasn't improved over the last early_stopping_patience checks, then (early stopping) pruning occurs.

    More specifically, the objective at a particular check for the purposes of pruning, reporting, etc. should be the min (or max) objective observed over the last early_stopping_patience checks.
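
    In the meantime, a minimal user-side sketch of this idea is to report the best value observed over the last few checks instead of the raw value; the jittery loss below is a toy, and patience is a hypothetical window size:

    import random
    from collections import deque

    import optuna

    def objective(trial):
        noise = trial.suggest_float('noise', 0.0, 0.2)
        patience = 5  # hypothetical smoothing window
        recent = deque(maxlen=patience)
        for step in range(100):
            loss = 1.0 / (step + 1) + random.uniform(0.0, noise)  # toy jittery loss
            recent.append(loss)
            trial.report(min(recent), step)  # report the best of the last `patience` checks
            if trial.should_prune():
                raise optuna.TrialPruned()
        return min(recent)

    study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, n_trials=20)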

    Alternatives (optional)

    I have tried interval_steps=10, but it doesn't help, because if the tenth step is the one that has jitter, the trial gets pruned.

    Additional context (optional)

    PyTorch's early stopping has a patience parameter similar to the one I propose.

    feature stale 
    opened by turian 41
  • Update Suggest API

    There is already a function in the Trial class for suggesting a discrete uniformly distributed float (Trial.suggest_discrete_uniform). It would be nice to have the same option that returns integers instead of floats.

    My use case is network representation learning - I want to suggest the embedding dimension in intervals of 50 between 50 and 350 (e.g. 50, 100, 150, ..., 300, 350).

    Currently, I can accomplish this by dividing out my interval, suggesting an integer, and multiplying it back, like:

    embeddings_dim = 50 * trial.suggest_int('embeddings_dim', 1, 7)
    

    But this loses the power of tracking the value within optuna. Alternatively, I could use the following, but then the data type is tracked incorrectly.

    embeddings_dim = int(trial.suggest_discrete_uniform('embeddings_dim', 50, 350, q=50))
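
    For reference, later versions of Optuna added a step argument to suggest_int, which covers this use case directly while tracking the actual value:

    embeddings_dim = trial.suggest_int('embeddings_dim', 50, 350, step=50)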
    
    opened by cthoyt 40
  • Convert all positional arguments to keyword-only

    Motivation

    This PR is based on https://github.com/optuna/optuna/issues/2911, but simply changing the order of the positional arguments of create_study or load_study could confuse users. So this aims to first encourage them to use keyword-only arguments, without any changes to the interface. Once users get used to keyword-only arguments, it becomes much easier to align their arguments.

    Description of the changes

    • Change all positional arguments of {create,load,delete,copy}_study() to keyword-only arguments.
    • When a caller of {create,load,delete,copy}_study() sets values as positional arguments, the decorator _convert_positional_args converts them to keyword arguments according to the signature of the decorated function and raises a warning. (A sketch follows.)
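
    A minimal sketch of how such a decorator can work (the names here are illustrative, not Optuna's actual internals):

    import functools
    import inspect
    import warnings

    def convert_positional_args(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if args:
                warnings.warn(
                    f'{func.__name__}: positional arguments are deprecated; '
                    'please use keyword arguments.',
                    FutureWarning,
                )
                # Map the positional arguments onto parameter names in order.
                for name, value in zip(sig.parameters, args):
                    kwargs[name] = value
            return func(**kwargs)

        return wrapper

    @convert_positional_args
    def create_study_like(study_name=None, storage=None):
        return study_name, storage

    print(create_study_like('my-study'))  # warns, then behaves as a keyword call
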
    feature optuna.study v3 sprint-20220130 
    opened by higucheese 37
  • Time-based pruning

    Out of curiosity, why isn't there a time-based pruner?

    Justification: some hyperparameters cause epochs to run much longer than others, and while it is true that the objective value might be better per epoch, it might still be a losing strategy from a time perspective.

    Does it make sense to compare trials by time instead of by epochs? Meaning: consider a trial better if its objective value is better after the same amount of time (regardless of how many epochs elapsed).
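
    Until such a pruner exists, one user-side approximation is a per-trial time budget enforced inside the objective; a minimal sketch follows (the budget and toy loss are illustrative):

    import datetime
    import time

    import optuna

    def objective(trial):
        x = trial.suggest_float('x', -10, 10)
        budget = datetime.timedelta(seconds=5)  # illustrative per-trial time budget
        value = float('inf')
        for step in range(100):
            time.sleep(0.05)  # stand-in for one epoch of training
            value = min(value, (x - 2) ** 2 + 1.0 / (step + 1))
            if datetime.datetime.now() - trial.datetime_start > budget:
                raise optuna.TrialPruned()  # give up on trials that run past the budget
        return value

    study = optuna.create_study()
    study.optimize(objective, n_trials=5)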

    feature stale 
    opened by cowwoc 34
  • Dask / Optuna Integration

    Hello Optuna maintainers,

    I work on Dask and I would like to improve integration between Optuna and Dask. I did a brief investigation a couple of days ago (see https://github.com/dask/dask/issues/6571) and now have some questions. Hopefully this is a good place to post questions:

    Nice design

    First, (not actually a question) Optuna's distribution system seems really simple. Thank you for the thoughtful design.

    Storage

    Second, I'm curious about the storage backend. It looks like I can call optuna.Study.optimize with the same function in many places and if all of those places point to the same storage then everything will work smoothly. Great. If the user has a redis or relational database that is globally accessible then I think that there is no more work to do, other than maybe some documentation or examples.

    However, I think that for many data science users setting up a relational database that is visible from every node in a cluster might be difficult. I think that it would be useful to create an equivalent of the local in-memory storage that worked across a Dask cluster. We can do this on the Dask side easily, but I have a couple of questions that might influence the design.

    1. How fast should this be? Is a 5ms response time fast enough, or should this be closer to 500us?
    2. I think that this will receive one query and one insert per trial. Is that correct? Those queries are for the full history of hyperparameters and scores so far?

    Testing

    Also, I'm curious about registering external storages with Optuna. If I make a Dask storage for Optuna I would like to also use the test suite in tests/storages_tests/test_storages.py. We have a few options here:

    1. We make a new dask-optuna package and copy-paste Optuna's test suite into that package
    2. We convert Optuna's test suite into a unittest.TestSuite class that we can then import in the dask-optuna package (this is nicer because dask-optuna tracks changes in optuna).
    3. We build the Dask storage within the Optuna codebase and add the Dask storage to the list of storages to test if Dask is installed

    Some of these changes affect the optuna codebase. Generally, the question here is if you want Dask modules inside github.com/optuna/optuna

    Joblib

    I tried wrapping study.optimize in a joblib.parallel_backend("dask") context manager and was surprised that Dask did not take over from the joblib threading backend. I'm not yet sure why. If anyone has any thoughts here I welcome them.
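
    For concreteness, the kind of wrapping described looks roughly like this (a sketch assuming a dask.distributed Client has been created so the "dask" joblib backend is available):

    import joblib
    import optuna
    from distributed import Client

    def objective(trial):
        x = trial.suggest_float('x', -10, 10)
        return x ** 2

    client = Client()  # local cluster; registers the "dask" joblib backend
    study = optuna.create_study()
    with joblib.parallel_backend('dask'):
        study.optimize(objective, n_trials=100, n_jobs=-1)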

    Is there anything else?

    Other than Storage, is there anything else that we should consider?

    I do not have practical experience with Optuna. Is this useful?

    Thank you for your time

    question 
    opened by mrocklin 34
  • Move validation logic from `_run_trial` to `study.tell`

    🔗 https://github.com/optuna/optuna/issues/3132

    This PR makes the behavior of optuna tell consistent with study.optimize when an objective value is nan or a list containing nan. Inside study.optimize, observing nan as an objective value is treated as a special case: Optuna does not raise an exception, but marks trial.state as failed and shows a log message. For the Optuna CLI, I think it would be natural to behave as in study.optimize.

    https://github.com/optuna/optuna/blob/cf33d05c6b6274c0ce24384af528d5fbc7dd0762/optuna/study/_optimize.py#L224-L230

    https://github.com/optuna/optuna/blob/cf33d05c6b6274c0ce24384af528d5fbc7dd0762/optuna/study/_optimize.py#L256-L257
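
    As a sketch of the corresponding Python-side behavior (assuming the post-change semantics, where a nan objective fails the trial instead of raising):

    import optuna

    study = optuna.create_study()
    trial = study.ask()
    # nan is converted to values=None and the trial is marked FAILED,
    # mirroring what study.optimize does when an objective returns nan.
    study.tell(trial, float('nan'))
    print(study.trials[0].state)  # TrialState.FAIL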

    Motivation

    • Make optuna tell consistent with study.optimize
      • One may think optuna tell should only be consistent with study.tell. Any feedback is highly appreciated.
        • If so, we should guide users to run optuna tell ... --state fail when they observe nan
    • When nan is passed to optuna tell, Optuna makes a trial state FAILED

    Description of the changes

    • abf53a655cd16509732000bb70009b633f447898 Add test case
    • 84413f7e87231c1e07098801a0a044be1dd0b40d Invoke _check_and_convert_to_values in _Tell.take_action

    Behavior

    > optuna create-study --storage sqlite:///example.db --study-name test
    [I 2021-12-03 22:09:54,814] A new study created in RDB with name: test
    test
    
    > optuna ask --storage sqlite:///sample.db --study-name test
    /home/himkt/work/github.com/himkt/optuna/optuna/cli.py:736: ExperimentalWarning: 'ask' is an experimental CLI command. The interface can change in the future.
      warnings.warn(
    [I 2021-12-03 22:10:01,697] A new study created in RDB with name: test
    [I 2021-12-03 22:10:01,715] Asked trial 0 with parameters {}.
    {"number": 0, "params": {}}
    
    > optuna tell --study-name test --storage sqlite:///sample.db --trial-number 0 --values nan
    /home/himkt/work/github.com/himkt/optuna/optuna/cli.py:852: ExperimentalWarning: 'tell' is an experimental CLI command. The interface can change in the future.
      warnings.warn(
    [I 2021-12-03 22:10:09,387] Told trial 0 with values None and state TrialState.FAIL.
    [W 2021-12-03 22:10:09,387] Trial 0 failed, because the objective function returned nan.
    
    >>> import optuna
    >>> study = optuna.load_study(storage="sqlite:///sample.db", study_name="test")
    >>> study.get_trials()
    [FrozenTrial(number=0, values=None, datetime_start=datetime.datetime(2021, 12, 3, 22, 10, 1, 708719), datetime_complete=datetime.datetime(2021, 12, 3, 22, 10, 9, 371719), params={}, distributions={}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=1, state=TrialState.FAIL, value=None)]
    >>>
    
    compatibility optuna.study v3 
    opened by himkt 31
  • Unify the univariate and multivariate TPE

    Depends on #2615 and #2616.

    Part of works for #2614.

    Motivation

    The univariate TPE is a special case of the multivariate TPE, but their implementations in Optuna currently overlap. This PR aims to resolve the redundancy.

    It seems to be difficult to support full backward compatibility including the behavior when the seed is fixed. The reason is that mus, sigmas, and weights in multivariate TPE are arranged in the order of observation, while those in current univariate TPE are arranged in the order of ascending mus. In this PR, we will adapt our logic to the multivariate TPE. We will verify through benchmarking experiments that the performance of the univariate TPE is not significantly impaired by this change.

    Description of the changes

    • Unify the univariate and multivariate TPE
    • Fix some tests

    TODOs

    • [x] Performance benchmark on kurobako

    Benchmark Results

    I ran a benchmark comparing this PR with the current master. In summary, the changes made by this PR do not significantly impair the performance of the algorithm.

    Environments:

    • optuna: this PR (https://github.com/optuna/optuna/pull/2618/commits/9d25958f6eee9ab804f01ae8ca2b758e0febbfdd) and the current master (https://github.com/optuna/optuna/commit/be407fdde4533c1df3fe8ec9026e884bc9d4fb15)
    • python: 3.8
    • kurobako: 0.2.9
    • algorithms: multivariate-tpe-master-PRUNER, multivariate-tpe-this-PR-PRUNER, tpe-master-PRUNER, tpe-this-PR-PRUNER

    Each algorithm was run 100 times with the same settings, and the mean and variance of the performance were plotted.

    Results

    [Benchmark plots with NopPruner, MedianPruner, and HyperbandPruner on the hpo-bench-naval, hpo-bench-parkinson, hpo-bench-protein, hpo-bench-slice, and nasbench problems.]

    compatibility optuna.samplers 
    opened by HideakiImamura 30
  • A proposal of `SobolSampler`

    Motivation

    The use of quasi-random low-discrepancy sequences in place of random sampling is known to perform better [1]. The Sobol sequence is one such sequence and is reported to perform well in [1].

    Description

    I implemented SobolSampler for Optuna using the package sobol_seq, which is 1) lightweight and 2) distributed under the MIT license. My implementation can be checked at my repository.

    Benchmark

    I compared my implementation of SobolSampler with the existing samplers of Optuna using kurobako. Over 100 benchmark runs per sampler and dataset, SobolSampler was (statistically significantly) better than RandomSampler on all datasets.

    [Benchmark plots on the hpo-bench-naval, hpo-bench-parkinson, hpo-bench-protein, and hpo-bench-slice problems.]

    TODOs (updated)

    • [x] Discuss how we should support the Sobol sequence generator in Optuna
    • [x] Create PR for SobolSampler (and maybe for Sobol sequence generator)
    • [ ] Benchmark the CMA-ES combined with the Sobol sequences

    References

    [1] Random Search for Hyper-Parameter Optimization

    feature stale 
    opened by kstoneriv3 29
  • Explicit explanation how to write docstring

    As contributors write docstrings for their implementations, it would be better to show an explicit example of how to write docstrings; e.g., the PyTorch reference points to Google-style docstrings.

    Currently, in Optuna, there is no explicit explanation of docstrings in the coding guidelines. They just refer to PEP 8, and it is difficult to know what docstrings should look like.
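
    For instance, a Google-style docstring looks like the following (the function itself is purely illustrative):

    def suggest_dimension(trial, low, high):
        """Suggest an embedding dimension for a trial.

        Args:
            trial: An optuna trial object.
            low: Lower bound of the dimension.
            high: Upper bound of the dimension.

        Returns:
            The suggested dimension as an ``int``.
        """
        return trial.suggest_int('dimension', low, high)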

    opened by keisuke-umezawa 29
  • Improve the document of `JournalStorage`

    Motivation

    JournalStorage sometimes does not work well in a Windows environment, and I would like to document the solution.

    Description of the changes

    The content in #4298 was reflected in the document.

    optuna.storages 
    opened by hrntsm 0
  • Remove `OPTUNA_STORAGE` environment variable to check missing storage errors

    Motivation

    #4299 enabled optuna commands to read storage URLs from the OPTUNA_STORAGE environment variable. So, if OPTUNA_STORAGE exists in os.environ, some test cases in tests/test_cli.py fail, and the users' storage is updated unintentionally.

    Description of the changes

    Remove OPTUNA_STORAGE from the env option of subprocess.check_call and subprocess.check_output to check missing storage errors.

    Note that the current changes may seem cumbersome; please feel free to correct them.

    test 
    opened by toshihikoyanase 1
  • Fix test_pytorch_lightning.py

    Motivation

    Resolve #3418 and #4116

    Description of the changes

    Remove deprecated features

    • trainer.training_type_plugin was deleted in v1.8 (PR#11239); training_type_plugin was simply renamed to strategy, so the code was refactored as suggested.

    • The trainer's optional accelerator argument no longer accepts ddp_cpu. Instead, there is now an optional strategy argument, to which ddp can be passed.

    • callback.on_init_start() was deleted in v1.8 (Issue#10894, PR#10940, PR#14867). This part is currently just commented out to avoid failure. As it only confirms that trainer._accelerator_connector.distributed_backend is properly set up when the trainer has it, it should be possible to move this confirmation somewhere else.

    optuna.integration 
    opened by Alnusjaponica 0
  • Avoid to use features that will be removed in SQLAlchemy v2.0

    Motivation

    The CuPy team reported that Optuna depends on some SQLAlchemy features that will be removed in the SQLAlchemy 2.0 release. https://github.com/cupy/cupy/pull/7276

    Description of the changes

    Fix warnings of sqlalchemy.exc.RemovedIn20Warning. We can detect them like:

    $ SQLALCHEMY_WARN_20=1 pytest tests/storages_tests -W error::sqlalchemy.exc.RemovedIn20Warning
    
    optuna.storages 
    opened by c-bata 2
  • [DO NOT MERGE] Add link to sampler class in the table

    Motivation

    It would be more convenient if each sampler class name in the table in this document linked to the corresponding documentation.

    Description of the changes

    Added a link to each sampler class name in the table in this document.

    opened by Alnusjaponica 1
Releases (v3.1.0-b0)
  • v3.1.0-b0(Dec 22, 2022)

    This is the release note of v3.1.0-b0.

    Highlights

    CMA-ES with Margin support

    [Animations: CMA-ES vs. CMA-ES with Margin. The animations are taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.]

    CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our CmaEsSampler, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - arXiv”, which has been accepted for presentation at GECCO 2022.

    import optuna
    from optuna.samplers import CmaEsSampler
    
    def objective(trial):
        x = trial.suggest_float("x", -10, 10, step=0.1)
        y = trial.suggest_int("y", -100, 100)
        return x**2 + y
     
    study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
    study.optimize(objective)
    

    Distributed Optimization via NFS

    JournalFileStorage, a file storage backend based on JournalStorage, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).

    import optuna
    from optuna.storages import JournalStorage, JournalFileStorage
    
    def objective(trial):
        x = trial.suggest_float("x", -100, 100)
        y = trial.suggest_float("y", -100, 100)
        return x**2 + y
     
    storage = JournalStorage(JournalFileStorage("./journal.log"))
    study = optuna.create_study(storage=storage)
    study.optimize(objective)
    

    For more information on JournalFileStorage, see the blog post “Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage” written by @wattlebirdaz.

    Dask Integration

    DaskStorage, a new storage backend based on Dask.distributed, is supported. It enables distributed computing with APIs similar to concurrent.futures. Example code follows (the full example is available in the optuna-examples repository).

    import optuna
    from optuna.storages import InMemoryStorage
    from optuna.integration import DaskStorage
    from distributed import Client, wait
    
    def objective(trial):
        ...
    
    with Client("192.168.1.8:8686") as client:
        study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
        futures = [
            client.submit(study.optimize, objective, n_trials=10, pure=False)
            for i in range(10)
        ]
        wait(futures)
        print(f"Best params: {study.best_params}")
    

    One of the interesting aspects is the availability of InMemoryStorage: you don't need to set up database servers for distributed optimization. You still need to set up the Dask.distributed cluster, but that is quite easy, as the following shows. See the Quickstart of the Dask.distributed documentation for more details.

    $ pip install optuna dask distributed
    
    $ dask-scheduler
    INFO - Scheduler at: tcp://192.168.1.8:8686
    INFO - Dashboard at:                  :8687
    …
    
    $ dask-worker tcp://192.168.1.8:8686
    $ dask-worker tcp://192.168.1.8:8686
    $ dask-worker tcp://192.168.1.8:8686
    
    $ python dask_simple.py
    

    A brand-new Redis storage

    We have replaced the Redis storage backend with a JournalStorage-based one. The experimental RedisStorage class has been removed in v3.1. The following example shows how to use the new JournalRedisStorage class.

    import optuna
    from optuna.storages import JournalStorage, JournalRedisStorage
    
    def objective(trial):
        ...
     
    storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
    study = optuna.create_study(storage=storage)
    study.optimize(objective)
    

    Sampler for brute-force search

    BruteForceSampler, a new sampler for brute-force search, tries all combinations of parameters. In contrast to GridSampler, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space with the define-by-run style, so it works by just adding sampler=optuna.samplers.BruteForceSampler().

    import optuna
    
    def objective(trial):
        c = trial.suggest_categorical("c", ["float", "int"])
        if c == "float":
            return trial.suggest_float("x", 1, 3, step=0.5)
        elif c == "int":
            a = trial.suggest_int("a", 1, 3)
            b = trial.suggest_int("b", a, 3)
            return a + b
    
    study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
    study.optimize(objective)
    

    Breaking Changes

    • Allow users to call study.optimize() in multiple threads (#4068)
    • Use all trials in TPESampler even when multivariate=True (#4079)
    • Drop Python 3.6 (#4150)
    • Remove RedisStorage (#4156)
    • Deprecate set_system_attr in Study and Trial (#4188)
    • Deprecate system_attrs in Study class (#4250)

    New Features

    • Add Dask integration (#2023, thanks @jrbourbeau!)
    • Add journal-style log storage (#3854)
    • Support CMA-ES with margin in CmaEsSampler (#4016)
    • Add journal redis storage (#4086)
    • Add device argument to BoTorchSampler (#4101)
    • Add the feature to JournalStorage of Redis backend to resume from a snapshot (#4102)
    • Added user_attrs to print by optuna studies in cli.py (#4129, thanks @gonzaload!)
    • Add BruteForceSampler (#4132, thanks @semiexp!)
    • Add __getstate__ and __setstate__ to RedisStorage (#4135, thanks @shu65!)
    • Support pickle in JournalRedisStorage (#4139, thanks @shu65!)
    • Support for qNoisyExpectedHypervolumeImprovement acquisition function from BoTorch (Issue#4014) (#4186)

    Enhancements

    • Change the log message format for failed trials (#3857, thanks @erentknn!)
    • Move default logic of get_trial_id_from_study_id_trial_number() method to BaseStorage (#3910)
    • Fix the data migration script for v3 release (#4020)
    • Convert search_space values of GridSampler explicitly (#4062)
    • Add single exception catch to study optimize (#4098)
    • Add validation in enqueue_trial (#4126)
    • Speed up tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints (#4128, thanks @mist714!)
    • Add getstate and setstate to journal storage (#4130, thanks @shu65!)
    • Support None in slice plot (#4133, thanks @belldandyxtq!)
    • Add marker to matplotlib plot_intermediate_value (#4134, thanks @belldandyxtq!)
    • Cache study.directions to reduce the number of get_study_directions() calls (#4146)
    • Add an in-memory cache in Trial class (#4240)

    Bug Fixes

    • Fix infinite loop bug in TPESampler (#3953, thanks @gasin!)
    • Fix GridSampler (#3957)
    • Fix an import error of sqlalchemy.orm.declarative_base (#3967)
    • Skip to add intermediate_value_type and value_type columns if exists (#4015)
    • Fix duplicated sampling of SkoptSampler (#4023)
    • Avoid parse errors of datetime.isoformat strings (#4025)
    • Fix a concurrency bug of JournalStorage set_trial_state_values (#4033)
    • Specify object type to numpy array init to avoid unintended str cast (#4035)
    • Make TPESampler reproducible (#4056)
    • Fix bugs in constant_liar option (#4073)
    • Add a flush to JournalFileStorage.append_logs (#4076)
    • Add a lock to MLflowCallback (#4097)
    • Reject deprecated distributions in OptunaSearchCV (#4120)
    • Stop using hash function in _get_bracket_id in HyperbandPruner (#4131, thanks @zaburo-ch!)
    • Validation for the parameter enqueued in to_internal_repr of FloatDistribution and IntDistribution (#4137)
    • Fix PartialFixedSampler to handle None correctly (#4147, thanks @halucinor!)
    • Fix the bug of JournalFileStorage on Windows (#4151)
    • Fix CmaEs system attribution key (#4184)

    Installation

    • Replace thop with fvcore (#3906)
    • Use the latest stable scipy (#3959, thanks @gasin!)
    • Remove GPyTorch version constraint (#3986)
    • Make typing_extensions optional (#3990)
    • Add version constraint on importlib-metadata (#4036)
    • Add a version constraint of matplotlib (#4044)

    Documentation

    • Update cli tutorial (#3902)
    • Replace thop with fvcore (#3906)
    • Slightly improve docs of FrozenTrial (#3943)
    • Refine docs in BaseStorage (#3948)
    • Remove "Edit on GitHub" button from readthedocs (#3952)
    • Mention restoring sampler in saving/resuming tutorial (#3992)
    • Use log_loss instead of deprecated log since sklearn 1.1 (#3993)
    • Fix script path in benchmarks/README.md (#4021)
    • Ignore ConvergenceWarning in the ask-and-tell tutorial (#4032)
    • Update docs to let users know the concurrency problem on SQLite3 (#4034)
    • Fix the time complexity of NSGAIISampler (#4045)
    • Fix sampler comparison table (#4082)
    • Add BruteForceSampler in the samplers' list (#4152)
    • Remove markup from NaN in FAQ (#4155)
    • Remove the document of the multi_objective module (#4167)
    • Fix a typo in QMCSampler (#4179)
    • Introduce Optuna Dashboard in tutorial docs (#4226)
    • Remove RedisStorage from docstring (#4232)
    • Add the BruteForceSampler example to the document (#4244)
    • Improve the document of BruteForceSampler (#4245)
    • Fix an inline markup in distributed tutorial (#4247)

    Examples

    • Add Dask example (https://github.com/optuna/optuna-examples/pull/46, thanks @jrbourbeau!)
    • Hotfix for botorch example (https://github.com/optuna/optuna-examples/pull/134)
    • Replace thop with fvcore (https://github.com/optuna/optuna-examples/pull/136)
    • Add Optuna-distributed to external projects (https://github.com/optuna/optuna-examples/pull/137)
    • Remove the version constraint of GPyTorch (https://github.com/optuna/optuna-examples/pull/138)
    • Fix a file path in CONTRIBUTING.md (https://github.com/optuna/optuna-examples/pull/139)
    • Install scikit-learn instead of sklearn (https://github.com/optuna/optuna-examples/pull/141)
    • Add constraint on tensorflow to <2.11.0 (https://github.com/optuna/optuna-examples/pull/146)
    • Specify botorch version (https://github.com/optuna/optuna-examples/pull/151)
    • Pin numpy version to 1.23.x for mxnet examples (https://github.com/optuna/optuna-examples/pull/154)

    Tests

    • Suppress warnings in tests/test_distributions.py (#3912)
    • Suppress warnings and minor code fixes in tests/trial_tests (#3914)
    • Reduce warning messages by tests/study_tests/ (#3915)
    • Remove dynamic search space based objective from a parallel job test (#3916)
    • Remove all warning messages from tests/integration_tests/test_sklearn.py (#3922)
    • Remove out-of-range related warning messages from MLflowCallback and WeightsAndBiasesCallback (#3923)
    • Ignore RuntimeWarning when nanmin and nanmax take an array only containing nan values from pruners_tests (#3924)
    • Remove warning messages from test files for pytorch_distributed and chainermn modules (#3927)
    • Remove warning messages from tests/integration_tests/test_lightgbm.py (#3944)
    • Resolve warnings in tests/visualization_tests/test_contour.py (#3954)
    • Reduced warning messages from tests/visualization_tests/test_slice.py (#3970, thanks @jmsykes83!)
    • Remove warning from a few visualization tests (#3989)
    • Deselect integration tests in Tests CI (#4013)
    • Remove warnings from tests/visualization_tests/test_optimization_history.py (#4024)
    • Unset PYTHONHASHSEED for the hash-dependent test (#4031)
    • Test: calling study.tell from another process (#4039, thanks @Abelarm!)
    • Improve test for heartbeat: Add test for the case that trial state should be kept running (#4055)
    • Remove warnings in the test of Pareto front (#4072)
    • Remove matplotlib get_cmap warning from tests/visualization_tests/test_param_importances.py (#4095)
    • Reduce tests' n_trials for CI time reduction (#4117)
    • Skip test_pop_waiting_trial_thread_safe on RedisStorage (#4119)
    • Simplify the test of BruteForceSampler for infinite search space (#4153)
    • Add sep-CMA-ES in parametrize_sampler (#4154)
    • Fix a broken test for dask.distributed integration (#4170)
    • Add DaskStorage to existing storage tests (#4176, thanks @jrbourbeau!)
    • Fix a test error in test_catboost.py (#4190)
    • Remove test/integration_tests/test_sampler.py (#4204)

    Code Fixes

    • Refactor _tell.py (#3841)
    • Make log message user-friendly when objective returns a sequence of unsupported values (#3868)
    • Gather mask of None parameter in TPESampler (#3886)
    • Update cli tutorial (#3902)
    • Migrate CLI from cliff to argparse (#4100)
    • Enable mypy --no-implicit-reexport option (#4110)
    • Remove unused function: find_any_distribution (#4127)
    • Remove object inheritance from base classes (#4161)
    • Use mlflow 2.0.1 syntax (#4173)
    • Simplify implementation of _preprocess_argv in CLI (#4187)
    • Move _solve_hssp to _hypervolume/utils.py (#4227, thanks @jpbianchi!)
    • Avoid to decode log string in JournalRedisStorage (#4246)

    Continuous Integration

    • Hotfix botorch module by adding the version constraint of gpytorch (#3950)
    • Drop python 3.6 from integration CIs (#3983)
    • Use PyTorch 1.11 for consistency and fix a typo (#3987)
    • Support Python 3.11 (#4018)
    • Remove # type: ignore for mypy 0.981 (#4019)
    • Fix metric inconsistency between bayesmark plots and report (#4077)
    • Pin Ubuntu version to 20.04 in Tests and Tests (Storage with server) (#4118)
    • Add workflow to test Optuna with lower versions of constraints (#4125)
    • Mark some tests slow and ignore in pull request trigger (#4138, thanks @mist714!)
    • Allow display names to be changed in benchmark scripts (Issue #4017) (#4145)
    • Disable scheduled workflow runs in forks (#4159)
    • Remove the CircleCI job document (#4160)
    • Stop running reproducibility tests on CI for PR (#4162)
    • Stop running reproducibility tests for coverage (#4163)
    • Add workflow_dispatch trigger to the integration tests (#4166)
    • [hotfix] Fix CI errors when using mlflow==2.0.1 (#4171)
    • Add fakeredis in benchmark deps (#4177)
    • Fix asv speed benchmark (#4185)
    • Skip tests with minimum version for Python 3.10 and 3.11 (#4199)
    • Split normal tests and tests with minimum versions (#4200)
    • Update action/[email protected] -> v3 (#4206)
    • Update actions/[email protected] -> v6 (#4208)
    • Pin botorch to avoid CI failure (#4228)
    • Add the pytest dependency for asv (#4243)

    Other

    • Bump up version number to 3.1.0.dev (#3934)
    • Remove the news section on README (#3940)
    • Add issue template for code fix (#3968)
    • Close stale issues immediately after labeling stale (#4071)
    • Remove tox.ini (#4078)
    • Replace gitter with GitHub Discussions (#4083)
    • Deprecate description-checked label (#4090)
    • Make days-before-issue-stale 300 days (#4091)
    • Unnecessary space removed (#4109, thanks @gonzaload!)
    • Add note not to share pickle files in bug reports (#4212)
    • Update the description of optuna-dashboard on README (#4217)
    • Remove optuna.TYPE_CHECKING (#4238)
    • Bump up version to v3.1.0-b0 (#4262)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @Abelarm, @Alnusjaponica, @HideakiImamura, @amylase, @belldandyxtq, @c-bata, @contramundum53, @cross32768, @erentknn, @eukaryo, @g-votte, @gasin, @gen740, @gonzaload, @halucinor, @himkt, @hvy, @jmsykes83, @jpbianchi, @jrbourbeau, @keisuke-umezawa, @knshnb, @mist714, @ncclementi, @not522, @nzw0301, @rene-rex, @semiexp, @shu65, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @zaburo-ch

    Source code(tar.gz)
    Source code(zip)
  • v3.0.5(Dec 19, 2022)

    This is the release note of v3.0.5.

    Bug Fixes

    • [Backport] Fix bugs in constant_liar option (#4257)

    Other

    • Bump up version number to 3.0.5 (#4256)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @HideakiImamura, @eukaryo, @toshihikoyanase

    Source code(tar.gz)
    Source code(zip)
  • v3.0.4(Dec 1, 2022)

    This is the release note of v3.0.4.

    Bug Fixes

    • [Backport] Specify object type to numpy array init to avoid unintended str cast (#4218)

    Other

    • Bump up version to v3.0.4 (#4214)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @HideakiImamura, @contramundum53

    Source code(tar.gz)
    Source code(zip)
  • v3.0.3(Oct 11, 2022)

    This is the release note of v3.0.3.

    Enhancements

    • [Backport] Fix the data migration script for v3 release (#4053)

    Bug Fixes

    • [Backport] Skip to add intermediate_value_type and value_type columns if exists (#4052)

    Installation

    • Backport #4036 and #4044 to pass tests on release-v3.0.3 branch (#4043)

    Other

    • Bump up version to v3.0.3 (#4041)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @c-bata, @contramundum53

    Source code(tar.gz)
    Source code(zip)
  • v3.0.2(Sep 15, 2022)

    This is the release note of v3.0.2.

    Highlights

    Bug fix for DB migration with SQLAlchemy v1.3

    In v3.0.0 or v3.0.1, DB migration fails with SQLAlchemy v1.3. We fixed this issue in v3.0.2.

    Removing typing-extensions from dependency

    In v3.0.0, typing-extensions was used for fine-grained type checking. However, that resulted in import failures when using older versions of typing-extensions. We made the dependency optional in v3.0.2.

    Bug Fixes

    • [Backport] Merge pull request #3967 from c-bata/fix-issue-3966 (#4004)

    Installation

    • [Backport] Merge pull request #3990 from c-bata/make-typing-extensions-optional (#4005)

    Others

    • Bump up version number to v3.0.2 (#3991)

    Thanks to All the Contributors!

    @contramundum53, @c-bata

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(Sep 8, 2022)

    This is the release note of v3.0.1.

    Highlights

    Bug fix for GridSampler with RDB

    In v3.0.0, GridSampler with RDB raises an error. This patch fixes this combination.

    Bug Fixes

    • Backport #3957 (#3972)

    Others

    • Bump up version number to v3.0.1 (#3973)

    Thanks to All the Contributors!

    @HideakiImamura, @contramundum53, @not522

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(Aug 29, 2022)

    This is the release note of v3.0.0.

    You do not need to read this from top to bottom to get a summary of Optuna v3; the recommended way is to read the release blog.

    If you want to update your existing projects from Optuna v2.x to Optuna v3, please see the migration guide and try out Optuna v3.

    Highlights

    New Features

    New NSGA-II Crossover Options

    New crossover options have been added to the NSGA-II sampler, the default multi-objective algorithm of Optuna. The performance for floating-point parameters is improved. Please visit #2903, #3221, and the document for more information.

    A New Algorithm: Quasi-Monte Carlo Sampler

    Quasi-Monte Carlo sampler is now supported. It can be used in place of RandomSampler, and can improve performance especially for high dimensional problems. See #2423, #2964, and the document for more information.

    Constrained Optimization Support for TPE

    TPESampler now supports constraint-aware optimization. For more information on this feature, please visit #3506 and the document.

    [Plots: without constraints vs. with constraints.]
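
    A minimal sketch of the constraints_func interface (the constraint below is illustrative; values less than or equal to zero are treated as feasible):

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        # Store the constraint value so the sampler can read it back from the trial.
        trial.set_user_attr("constraint", (x - 2.0,))  # feasible when x - 2 <= 0
        return x ** 2

    def constraints(trial):
        return trial.user_attrs["constraint"]

    sampler = optuna.samplers.TPESampler(constraints_func=constraints)
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=50)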

    Constraints Support for Pareto-front Plot

    Pareto-front plot now shows which trials satisfy the constraints and which do not. For more information, please see the following PRs (#3128, #3497, and #3389) and the document.

    A New Importance Evaluator: ShapleyImportanceEvaluator

    We introduced a new importance evaluator, optuna.integration.ShapleyImportanceEvaluator, which uses SHAP. See #3507 and the document for more information.

    New History Visualization with Multiple Studies

    Optimization history plot can now compare multiple studies or display the mean and variance of multiple studies optimized with the same settings. For more information, please see the following multiple PRs (#2807, #3062, #3122, and #3736) and the document.
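
    A minimal sketch of comparing studies this way (the study names and objective are illustrative):

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        return x ** 2

    studies = []
    for i in range(3):
        study = optuna.create_study(study_name=f"run-{i}")
        study.optimize(objective, n_trials=50)
        studies.append(study)

    # Overlay the runs, or aggregate them with error bars.
    fig = optuna.visualization.plot_optimization_history(studies, error_bar=True)
    fig.show()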

    Improved Stability

    Optuna has a number of core APIs, among them the suggest API and the optuna.Study class. The visualization module is also frequently used to analyze results. Many of these have been simplified, stabilized, and refactored in v3.0.

    Simplified Suggest API

    The suggest API has been aggregated into three APIs: suggest_float for floating-point parameters, suggest_int for integer parameters, and suggest_categorical for categorical parameters. For more information, see #2939, #2941, and the PRs submitted for those issues.
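
    In code, every parameter type now goes through one of these three calls (the parameter names are illustrative):

    x = trial.suggest_float("x", -5.0, 5.0)  # floating point
    n = trial.suggest_int("n", 1, 128, log=True)  # integer
    optimizer = trial.suggest_categorical("optimizer", ["adam", "sgd"])  # categorical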

    Introduction of a Test Policy

    We have developed and published a test policy in v3.0 that defines how tests for Optuna should be written. Based on the published test policy, we have improved many unit tests. For more information, see https://github.com/optuna/optuna/issues/2974 and PRs with test label.

    Visualization Refactoring

    Optuna's visualization module had a long history and various technical debts. We have worked throughout v3.0 to eliminate this debt with the help of many contributors. See #2893, #2913, #2959, and the PRs submitted for those issues.

    Stabilized Features

    Through the development of v3.0, we have decided to provide many experimental features as stable features by going through their behavior, fixing bugs, and analyzing use cases. The following is a list of features that have been stabilized in v3.0.

    Performance Verification

    Optuna has many algorithms implemented, but many of their behaviors and characteristics are unknown to the user. We have put together a table of empirically known behaviors and characteristics to inform users. See #3571 and #3593 for more details.

    To quantitatively assess the performance of our algorithms, we have developed a benchmarking environment. We also evaluated the performance of the algorithms by conducting actual benchmarking experiments using this environment. See here, #2964, and #2906 for more details.

    Breaking Changes

    Changes to the RDB schema:

    • To use Optuna v3.0.0 with RDBStorage that was created in the previous versions of Optuna, please execute optuna storage upgrade to migrate your database (#3113, #3559, #3603, #3668).

    Features deprecated in 3.0:

    • suggest_uniform(), suggest_loguniform(), and suggest_discrete_uniform(); UniformDistribution, LogUniformDistribution, DiscreteUniformDistribution, IntUniformDistribution, and IntLogUniformDistribution (#3246, #3420)
    • Positional arguments of create_study(), load_study(), delete_study(), and copy_study() (#3270)
    • axis_order argument of plot_pareto_front() (#3341)

    Features removed in 3.0:

    • optuna dashboard command (#3058)
    • optuna.structs module (#3057)
    • best_booster property of LightGBMTuner (#3057)
    • type_checking module (#3235)

    Minor breaking changes:

    • Add option to exclude best trials from study summaries (#3109)
    • Move validation logic from _run_trial to study.tell (#3144)
    • Use an enqueued parameter that is out of range from suggest API (#3298)
    • Fix distribution compatibility for linear and logarithmic distribution (#3444)
    • Remove get_study_id_from_trial_id, the method of BaseStorage (#3538)

    New Features

    • Add interval for LightGBM callback (#2490)
    • Allow multiple studies and add error bar option to plot_optimization_history (#2807)
    • Support PyTorch-lightning DDP training (#2849, thanks @tohmae!)
    • Add crossover operators for NSGA-II (#2903, thanks @yoshinobc!)
    • Add abbreviated JSON formats of distributions (#2905)
    • Extend MLflowCallback interface (#2912, thanks @xadrianzetx!)
    • Support AllenNLP distributed pruning (#2977)
    • Make trial.user_attrs logging optional in MLflowCallback (#3043, thanks @xadrianzetx!)
    • Support multiple input of studies when plot with Matplotlib (#3062, thanks @TakuyaInoue-github!)
    • Add IntDistribution & FloatDistribution (#3063, thanks @nyanhi!)
    • Add trial.user_attrs to pareto_front hover text (#3082, thanks @kasparthommen!)
    • Support error bar for Matplotlib (#3122, thanks @TakuyaInoue-github!)
    • Add optuna tell with --skip-if-finished (#3131)
    • Add QMC sampler (#2423, thanks @kstoneriv3!)
    • Refactor pareto front and support constraints_func in plot_pareto_front (#3128, thanks @semiexp!)
    • Add skip_if_finished flag to Study.tell (#3150, thanks @xadrianzetx!)
    • Add user_attrs argument to Study.enqueue_trial (#3185, thanks @knshnb!)
    • Option to inherit intermediate values in RetryFailedTrialCallback (#3269, thanks @knshnb!)
    • Add setter method for DiscreteUniformDistribution.q (#3283)
    • Stabilize allennlp integrations (#3228)
    • Stabilize create_trial (#3196)
    • Add CatBoostPruningCallback (#2734, thanks @tohmae!)
    • Create common API for all NSGA-II crossover operations (#3221)
    • Add a history of retried trial numbers in Trial.system_attrs (#3223, thanks @belltailjp!)
    • Convert all positional arguments to keyword-only (#3270, thanks @higucheese!)
    • Stabilize study.py (#3309)
    • Add targets and deprecate axis_order in optuna.visualization.matplotlib.plot_pareto_front (#3341, thanks @shu65!)
    • Add targets argument to plot_pareto_plont of plotly backend (#3495, thanks @TakuyaInoue-github!)
    • Support constraints_func in plot_pareto_front in matplotlib visualization (#3497, thanks @fukatani!)
    • Calculate the feature importance with mean absolute SHAP values (#3507, thanks @liaison!)
    • Make GridSampler reproducible (#3527, thanks @gasin!)
    • Replace ValueError with warning in GridSearchSampler (#3545)
    • Implement callbacks argument of OptunaSearchCV (#3577)
    • Add option to skip table creation to RDBStorage (#3581)
    • Add constraints option to TPESampler (#3506)
    • Add skip_if_exists argument to enqueue_trial (#3629)
    • Remove experimental from plot_pareto_front (#3643)
    • Add popsize argument to CmaEsSampler (#3649)
    • Add seed argument for BoTorchSampler (#3756)
    • Add seed argument for SkoptSampler (#3791)
    • Revert AllenNLP integration back to experimental (#3822)
    • Remove abstractmethod decorator from get_trial_id_from_study_id_trial_number (#3909)

    Enhancements

    • Add single distribution support to BoTorchSampler (#2928)
    • Speed up import optuna (#3000)
    • Fix _contains of IntLogUniformDistribution (#3005)
    • Render importance scores next to bars in matplotlib.plot_param_importances (#3012, thanks @xadrianzetx!)
    • Make the default value of verbose_eval None for LightGBMTuner/LightGBMTunerCV to avoid conflict (#3014, thanks @chezou!)
    • Unify colormap of plot_contour (#3017)
    • Relax FixedTrial and FrozenTrial allowing not-contained parameters during suggest_* (#3018)
    • Raise errors if optuna ask CLI receives --sampler-kwargs without --sampler (#3029)
    • Remove _get_removed_version_from_deprecated_version function (#3065, thanks @nuka137!)
    • Reformat labels for small importance scores in plotly.plot_param_importances (#3073, thanks @xadrianzetx!)
    • Speed up Matplotlib backend plot_contour using SciPy's spsolve (#3092)
    • Remove updates in cached storage (#3120, thanks @shu65!)
    • Reduce number of queries to fetch directions, user_attrs and system_attrs of study summaries (#3108)
    • Support FloatDistribution across codebase (#3111, thanks @xadrianzetx!)
    • Use json.loads to decode pruner configuration loaded from environment variables (#3114)
    • Show progress bar based on timeout (#3115, thanks @xadrianzetx!)
    • Support IntDistribution across codebase (#3126, thanks @nyanhi!)
    • Make progress bar available with n_jobs!=1 (#3138, thanks @masap!)
    • Wrap RedisStorage in CachedStorage (#3204, thanks @masap!)
    • Use functools.wraps in track_in_mlflow decorator (#3216)
    • Make RedisStorage fast when running multiple trials (#3262, thanks @masap!)
    • Reduce database query result for Study.ask() (#3274, thanks @masap!)
    • Enable cache for study.tell() (#3265, thanks @masap!)
    • Warn if heartbeat is used with ask-and-tell (#3273)
    • Make optuna.study.get_all_study_summaries() of RedisStorage fast (#3278, thanks @masap!)
    • Improve Ctrl-C interruption handling (#3374, thanks @CorentinNeovision!)
    • Use same colormap among plotly visualization methods (#3376)
    • Make EDF plots handle trials with nonfinite values (#3435)
    • Make logger message optional in filter_nonfinite (#3438)
    • Set precision of sqlalchemy.Float in RDBStorage table definition (#3327)
    • Accept nan in trial.report (#3348, thanks @belldandyxtq!)
    • Lazy import of alembic, sqlalchemy, and scipy (#3381)
    • Unify pareto front (#3389, thanks @semiexp!)
    • Make set_trial_param() of RedisStorage faster (#3391, thanks @masap!)
    • Make _set_best_trial() of RedisStorage faster (#3392, thanks @masap!)
    • Make set_study_directions() of RedisStorage faster (#3393, thanks @masap!)
    • Make optuna compatible with wandb sweep panels (#3403, thanks @captain-pool!)
    • Change "#Trials" to "Trial" in plot_slice, plot_pareto_front, and plot_optimization_history (#3449, thanks @dubey-anshuman!)
    • Make contour plots handle trials with nonfinite values (#3451)
    • Query studies for trials only once in EDF plots (#3460)
    • Make Parallel-Coordinate plots handle trials with nonfinite values (#3471, thanks @divyanshugit!)
    • Separate heartbeat functionality from BaseStorage (#3475)
    • Remove torch.distributed calls from TorchDistributedTrial properties (#3490, thanks @nlgranger!)
    • Remove the internal logic that calculates the interaction of two or more variables in fANOVA (#3543)
    • Handle inf/-inf for trial_values table in RDB (#3559)
    • Add intermediate_value_type column to represent inf/-inf on RDBStorage (#3564)
    • Move is_heartbeat_enabled from storage to heartbeat (#3596)
    • Refactor ImportanceEvaluators (#3597)
    • Avoid maximum limit when MLflow saves information (#3651)
    • Control metric decimal digits precision in bayesmark benchmark report (#3693)
    • Support inf values for crowding distance (#3743)
    • Normalize importance values (#3828)

    Bug Fixes

    • Add tests of sample_relative and fix type of return values of SkoptSampler and PyCmaSampler (#2897)
    • Fix GridSampler with RetryFailedTrialCallback or enqueue_trial (#2946)
    • Fix the type of trial.values in MLflow integration (#2991)
    • Fix to raise ValueError for invalid q in DiscreteUniformDistribution (#3001)
    • Do not call trial.report during sanity check (#3002)
    • Fix matplotlib.plot_contour bug (#3046, thanks @IEP!)
    • Handle single distributions in fANOVA evaluator (#3085, thanks @xadrianzetx!)
    • Fix bug of nondeterministic behavior of TPESampler when group=True (#3187, thanks @xuzijian629!)
    • Handle non-numerical params in matplotlib.contour_plot (#3213, thanks @xadrianzetx!)
    • Fix log scale axes padding in matplotlib.contour_plot (#3218, thanks @xadrianzetx!)
    • Handle -inf and inf values in RDBStorage (#3238, thanks @xadrianzetx!)
    • Skip limiting the value if it is nan (#3286)
    • Make TPE work with a categorical variable with different choice types (#3190, thanks @keisukefukuda!)
    • Fix axis range issue in matplotlib contour plot (#3249, thanks @harupy!)
    • Allow fail_state_trials show warning when heartbeat is enabled (#3301)
    • Clip untransformed values sampled from int uniform distributions (#3319)
    • Fix missing user_attrs and system_attrs in study summaries (#3352)
    • Fix objective scale in parallel coordinate of Matplotlib (#3369)
    • Fix matplotlib.plot_parallel_coordinate with log distributions (#3371)
    • Fix parallel coordinate with missing value (#3373)
    • Add utility to filter trials with inf values from visualizations (#3395)
    • Return the best trial number, not worst trial number by best_index_ (#3410)
    • Avoid using px.colors.sequential.Blues that introduces pandas dependency (#3422)
    • Fix _is_reverse_scale (#3424)
    • Import COLOR_SCALE inside import util context (#3492)
    • Remove -v option of optuna study set-user-attr command (#3499, thanks @nyanhi!)
    • Filter trials with nonfinite value in optuna.visualization.plot_param_importances and optuna.visualization.matplotlib.plot_param_importance (#3500, thanks @takoika!)
    • Fix --verbose and --quiet options in CLI (#3532, thanks @nyanhi!)
    • Replace ValueError with RuntimeError in get_best_trial (#3541)
    • Take the same search space as in CategoricalDistribution by GridSampler (#3544)
    • Fix CategoricalDistribution with NaN (#3567)
    • Fix NaN comparison in grid sampler (#3592)
    • Fix bug in IntersectionSearchSpace (#3666)
    • Remove trial_values records whose values are None (#3668)
    • Fix PostgreSQL primary key unsorted problem (#3702, thanks @wattlebirdaz!)
    • Raise error on NaN in _constrained_dominates (#3738)
    • Fix inf-related issue on implementation of _calculate_nondomination_rank (#3739)
    • Raise errors for NaN in constraint values (#3740)
    • Fix _calculate_weights such that it throws ValueError on invalid weights (#3742)
    • Change warning for axis_order of plot_pareto_front (#3802)
    • Fix check for number of objective values (#3808)
    • Raise ValueError when waiting trial is told (#3814)
    • Fix Study.tell with invalid values (#3819)
    • Fix infeasible case in NSGAII test (#3839)

    Installation

    • Support scikit-learn v1.0.0 (#3003)
    • Pin tensorflow and tensorflow-estimator versions to <2.7.0 (#3059)
    • Add upper version constraint of PyTorchLightning (#3077)
    • Pin keras version to <2.7.0 (#3078)
    • Remove version constraints of tensorflow (#3084)
    • Bump to torch related packages (#3156)
    • Use pytorch-lightning>=1.5.0 (#3157)
    • Remove testoutput from doctest of mlflow integration (#3170)
    • Restrict nltk version (#3201)
    • Add version constraints of setuptools (#3207)
    • Remove version constraint of setuptools (#3231)
    • Remove Sphinx version constraint (#3237)
    • Drop TensorFlow support for Python 3.6 (#3296)
    • Pin AllenNLP version (#3367)
    • Skip run fastai job on Python 3.6 (#3412)
    • Avoid latest click==8.1.0 that removed a deprecated feature (#3413)
    • Avoid latest PyTorch lightning until integration is updated (#3417)
    • Revert "Avoid latest click==8.1.0 that removed a deprecated feature" (#3430)
    • Partially support Python 3.10 (#3353)
    • Clean up setup.py (#3517)
    • Remove duplicate requirements from document section (#3613)
    • Add a version constraint of cached-path (#3665)
    • Relax version constraint of fakeredis (#3905)
    • Add version constraint for typing_extensions to use ParamSpec (#3926)

    Documentation

    • Add note of the behavior when calling multiple trial.report (#2980)
    • Add note for DDP training of pytorch-lightning (#2984)
    • Add note to OptunaSearchCV about direction (#3007)
    • Clarify n_trials in the docs (#3016, thanks @Rohan138!)
    • Add a note to use pickle with different optuna versions (#3034)
    • Unify the visualization docs (#3041, thanks @sidshrivastav!)
    • Fix a grammatical error in FAQ doc (#3051, thanks @belldandyxtq!)
    • Less ambiguous documentation for optuna tell (#3052)
    • Add example for logging.set_verbosity (#3061, thanks @drumehiron!)
    • Mention the tutorial of 002_configurations.py in the Trial API page (#3067, thanks @makkimaki!)
    • Mention the tutorial of 003_efficient_optimization_algorithms.py in the Trial API page (#3068, thanks @makkimaki!)
    • Add link from set_user_attrs in Study to the user_attrs entry in Tutorial (#3069, thanks @MasahitoKumada!)
    • Update description for missing samplers and pruners (#3087, thanks @masaaldosey!)
    • Simplify the unit testing explanation (#3089)
    • Fix range description in suggest_float docstring (#3091, thanks @xadrianzetx!)
    • Fix documentation for the package installation procedure on different OS (#3118, thanks @masap!)
    • Add description of ValueError and TypeError to Raises section of Trial.report (#3124, thanks @MasahitoKumada!)
    • Add a note logging_callback only works in single process situation (#3143)
    • Correct FrozenTrial's docstring (#3161)
    • Promote to use of v3.0.0a0 in README.md (#3167)
    • Mention tutorial of callback for Study.optimize from API page (#3171, thanks @xuzijian629!)
    • Add reference to tutorial page in study.enqueue_trial (#3172, thanks @knshnb!)
    • Fix typo in specify_params (#3174, thanks @knshnb!)
    • Guide to tutorial of Multi-objective Optimization in visualization tutorial (#3182, thanks @xuzijian629!)
    • Add explanation about Parallelize Optimization at FAQ (#3186, thanks @MasahitoKumada!)
    • Add order in tutorial (#3193, thanks @makinzm!)
    • Fix inconsistency in distributions documentation (#3222, thanks @xadrianzetx!)
    • Add FAQ entry for heartbeat (#3229)
    • Replace AUC with accuracy in docs (#3242)
    • Fix Raises section of FloatDistribution docstring (#3248, thanks @xadrianzetx!)
    • Add {Float,Int}Distribution to docs (#3252)
    • Update explanation for metrics of AllenNLPExecutor (#3253)
    • Add missing cli methods to the list (#3268)
    • Add docstring for property DiscreteUniformDistribution.q (#3279)
    • Add reference to tutorial page in CLI (#3267, thanks @tsukudamayo!)
    • Carry over notes on step behavior to new distributions (#3276)
    • Correct the disable condition of show_progress_bar (#3287)
    • Add a document to lead FAQ and example of heartbeat (#3294)
    • Add a note for copy_study: it creates a copy regardless of its state (#3295)
    • Add note to recommend Python 3.8 or later in documentation build with artifacts (#3312)
    • Fix crossover references in Raises doc section (#3315)
    • Add reference to QMCSampler in tutorial (#3320)
    • Fix layout in tutorial (with workaround) (#3322)
    • Scikit-learn required for plot_param_importances (#3332, thanks @ll7!)
    • Add a link to multi-objective tutorial from a pareto front page (#3339, thanks @kei-mo!)
    • Add reference to tutorial page in visualization (#3340, thanks @Hiroyuki-01!)
    • Mention tutorials of User-Defined Sampler/Pruner from the API reference pages (#3342, thanks @hppRC!)
    • Add reference to saving/resuming study with RDB backend (#3345, thanks @Hiroyuki-01!)
    • Fix a typo (#3360)
    • Remove deprecated command optuna study optimize in FAQ (#3364)
    • Fix nit typo (#3380)
    • Add see also section for best_trial (#3396, thanks @divyanshugit!)
    • Updates the tutorial page for re-use the best trial (#3398, thanks @divyanshugit!)
    • Add explanation about Study.best_trials in multi-objective optimization tutorial (#3443)
    • Clean up exception docstrings (#3429)
    • Revise docstring in MLFlow and WandB callbacks (#3477)
    • Change the parameter name from classifier to regressor in the code snippet of README.md (#3481)
    • Add link to Minituna in CONTRIBUTING.md (#3482)
    • Fix benchmarks/README.md for the bayesmark section (#3496)
    • Mention Study.stop as a criteria to stop creating trials in document (#3498, thanks @takoika!)
    • Fix minor English errors in the docstring of study.optimize (#3505)
    • Add Python 3.10 in supported version in README.md (#3508)
    • Remove articles at the beginning of sentences in crossovers (#3509)
    • Correct FrozenTrial's docstring (#3514)
    • Mention specify hyperparameter tutorial (#3515)
    • Fix typo in MLFlow callback (#3533)
    • Improve docstring of GridSampler's seed option (#3568)
    • Add the samplers comparison table (#3571)
    • Replace youtube.com with youtube-nocookie.com (#3590)
    • Fix time complexity of the samplers comparison table (#3593)
    • Remove language from docs configuration (#3594)
    • Add documentation of SHAP integration (#3623)
    • Remove news entry on Optuna user survey (#3645)
    • Introduce optuna-fast-fanova (#3647)
    • Add github discussions link (#3660)
    • Fix a variable name of ask-and-tell tutorial (#3663)
    • Clarify which trials are used for importance evaluators (#3707)
    • Fix typo in Study.optimize (#3720, thanks @29Takuya!)
    • Update link to plotly's jupyterlab-support page (#3722, thanks @29Takuya!)
    • Update CONTRIBUTING.md (#3726)
    • Remove "Edit on Github" button (#3777, thanks @cfkazu!)
    • Remove duplicated period at the end of copyright (#3778)
    • Add note for deprecation of plot_pareto_front's axis_order (#3803)
    • Describe the purpose of prepare_study_with_trials (#3809)
    • Fix a typo in docstring of ShapleyImportanceEvaluator (#3810)
    • Add a reference for MOTPE (#3838, thanks @y0z!)
    • Minor fixes of sampler comparison table (#3850)
    • Fix typo: Replace trail with trial (#3861)
    • Add .. seealso:: in Study.get_trials and Study.trials (#3862, thanks @jmsykes83!)
    • Add docstring of TrialState.is_finished (#3869)
    • Fix docstring in FrozenTrial (#3872, thanks @wattlebirdaz!)
    • Add note to explain when colormap reverses (#3873)
    • Make NSGAIISampler docs informative (#3880)
    • Add note for constant_liar with multi-objective function (#3881)
    • Use copybutton_prompt_text not to copy the bash prompt (#3882)
    • Fix typo in HyperbandPruner (#3894)
    • Improve HyperBand docs (#3900)
    • Mention reproducibility of HyperBandPruner (#3901)
    • Add a new note to mention unsupported GPU case for CatBoostPruningCallback (#3903)

    Examples

    • Use RetryFailedTrialCallback in pytorch_checkpoint example (https://github.com/optuna/optuna-examples/pull/59, thanks @xadrianzetx!)
    • Add Python 3.9 to CI yaml files (https://github.com/optuna/optuna-examples/pull/61)
    • Replace suggest_uniform with suggest_float (https://github.com/optuna/optuna-examples/pull/63)
    • Remove deprecated warning message in lightgbm (https://github.com/optuna/optuna-examples/pull/64)
    • Pin tensorflow and tensorflow-estimator versions to <2.7.0 (https://github.com/optuna/optuna-examples/pull/66)
    • Restrict upper version of pytorch-lightning (https://github.com/optuna/optuna-examples/pull/67)
    • Add an external resource to README.md (https://github.com/optuna/optuna-examples/pull/68, thanks @solegalli!)
    • Add pytorch-lightning DDP example (https://github.com/optuna/optuna-examples/pull/43, thanks @tohmae!)
    • Install latest AllenNLP (https://github.com/optuna/optuna-examples/pull/73)
    • Restrict nltk version (https://github.com/optuna/optuna-examples/pull/75)
    • Add version constraints of setuptools (https://github.com/optuna/optuna-examples/pull/76)
    • Remove constraint of setuptools (https://github.com/optuna/optuna-examples/pull/79)
    • Remove Python 3.6 from haiku's CI (https://github.com/optuna/optuna-examples/pull/83)
    • Apply black 22.1.0 & run checks daily (https://github.com/optuna/optuna-examples/pull/84)
    • Add hiplot example (https://github.com/optuna/optuna-examples/pull/86)
    • Stop running jobs using TF with Python3.6 (https://github.com/optuna/optuna-examples/pull/87)
    • Pin AllenNLP version (https://github.com/optuna/optuna-examples/pull/89)
    • Add Medium link (https://github.com/optuna/optuna-examples/pull/91)
    • Use official CatBoostPruningCallback (https://github.com/optuna/optuna-examples/pull/92)
    • Stop running fastai job on Python 3.6 (https://github.com/optuna/optuna-examples/pull/93)
    • Specify Python version using str in workflow files (https://github.com/optuna/optuna-examples/pull/95)
    • Introduce upper version constraint of PyTorchLightning (https://github.com/optuna/optuna-examples/pull/96)
    • Update SimulatedAnnealingSampler to support FloatDistribution (https://github.com/optuna/optuna-examples/pull/97)
    • Fix version of JAX (https://github.com/optuna/optuna-examples/pull/99)
    • Remove constraints by #99 (https://github.com/optuna/optuna-examples/pull/100)
    • Replace some methods in the sklearn example (https://github.com/optuna/optuna-examples/pull/102, thanks @MasahitoKumada!)
    • Add Python3.10 in allennlp.yml (https://github.com/optuna/optuna-examples/pull/104)
    • Remove numpy (https://github.com/optuna/optuna-examples/pull/105)
    • Add python 3.10 to fastai CI (https://github.com/optuna/optuna-examples/pull/106)
    • Add python 3.10 to non-integration examples CIs (https://github.com/optuna/optuna-examples/pull/107)
    • Add python 3.10 to Hiplot CI (https://github.com/optuna/optuna-examples/pull/108)
    • Add a comma to visualization.yml (https://github.com/optuna/optuna-examples/pull/109)
    • Rename WandB example to follow naming rules (https://github.com/optuna/optuna-examples/pull/110)
    • Add scikit-learn version constraint for Dask-ML (https://github.com/optuna/optuna-examples/pull/112)
    • Add python 3.10 to sklearn CI (https://github.com/optuna/optuna-examples/pull/113)
    • Set version constraint of protobuf in PyTorch Lightning example (https://github.com/optuna/optuna-examples/pull/116)
    • Introduce stale bot (https://github.com/optuna/optuna-examples/pull/119)
    • Use Hydra 1.2 syntax (https://github.com/optuna/optuna-examples/pull/122)
    • Fix CI due to thop (https://github.com/optuna/optuna-examples/pull/123)
    • Hotfix allennlp dependency (https://github.com/optuna/optuna-examples/pull/124)
    • Remove unreferenced variable in pytorch_simple.py (https://github.com/optuna/optuna-examples/pull/125)
    • set OMPI_MCA_rmaps_base_oversubscribe=yes before mpirun (https://github.com/optuna/optuna-examples/pull/126)
    • Add python 3.10 to python-version (https://github.com/optuna/optuna-examples/pull/127)
    • Remove upper version constraint of sklearn (https://github.com/optuna/optuna-examples/pull/128)
    • Move catboost integration line to integration section from pruning section (https://github.com/optuna/optuna-examples/pull/129)
    • Simplify skimage example (https://github.com/optuna/optuna-examples/pull/130)
    • Remove deprecated warning in PyTorch Lightning example (https://github.com/optuna/optuna-examples/pull/131)
    • Resolve TODO task in ray example (https://github.com/optuna/optuna-examples/pull/132)
    • Remove version constraint of cached-path (https://github.com/optuna/optuna-examples/pull/133)

    Tests

    • Add test case of samplers for conditional objective function (#2904)
    • Test int distributions with default step (#2924)
    • Be aware of trial preparation when checking heartbeat interval (#2982)
    • Simplify the DDP model definition in the test of pytorch-lightning (#2983)
    • Wrap data with np.asarray in lightgbm test (#2997)
    • Patch calls to deprecated suggest APIs across codebase (#3027, thanks @xadrianzetx!)
    • Make return_cvbooster of LightGBMTuner consistent to the original value (#3070, thanks @abatomunkuev!)
    • Fix parametrize_sampler (#3080)
    • Fix verbosity for tests/integration_tests/lightgbm_tuner_tests/test_optimize.py (#3086, thanks @nyanhi!)
    • Generalize empty search space test case to all hyperparameter importance evaluators (#3096, thanks @xadrianzetx!)
    • Check if texts in legend by order agnostic way (#3103)
    • Add tests for axis scales to matplotlib.plot_slice (#3121)
    • Add tests for transformer with upper bound parameter (#3163)
    • Add tests in visualization_tests/matplotlib_tests/test_slice.py (#3175, thanks @keisukefukuda!)
    • Add test case of the value in optimization history with matplotlib (#3176, thanks @TakuyaInoue-github!)
    • Add tests for generated plots of matplotlib.plot_edf (#3178, thanks @makinzm!)
    • Improve pareto front figure tests for matplotlib (#3183, thanks @akawashiro!)
    • Add tests for generated plots of plot_edf (#3188, thanks @makinzm!)
    • Match contour tests between Plotly and Matplotlib (#3192, thanks @belldandyxtq!)
    • Implement missing matplotlib.contour_plot test (#3232, thanks @xadrianzetx!)
    • Unify the validation function of edf value between visualization backends (#3233)
    • Add test for default grace period (#3263, thanks @masap!)
    • Add the missing tests of Plotly's plot_parallel_coordinate (#3266, thanks @MasahitoKumada!)
    • Switch function order progbar tests (#3280, thanks @BasLaa!)
    • Add plot value tests to matplotlib_tests/test_param_importances (#3180, thanks @belldandyxtq!)
    • Make tests of plot_optimization_history methods consistent (#3234)
    • Add integration test for RedisStorage (#3258, thanks @masap!)
    • Change the order of arguments in the catalyst integration test (#3308)
    • Cleanup MLflowCallback tests (#3378)
    • Test serialize/deserialize storage on parametrized conditions (#3407)
    • Add tests for parameter of 'None' for TPE (#3447)
    • Improve matplotlib parallel coordinate test (#3368)
    • Save figures for all matplotlib tests (#3414, thanks @divyanshugit!)
    • Add inf test to intermediate values test (#3466)
    • Add test cases for test_storages.py (#3480)
    • Improve the tests of optuna.visualization.plot_pareto_front (#3546)
    • Move heartbeat-related tests in test_storages.py to another file (#3553)
    • Use seed method of np.random.RandomState for reseeding and fix test_reseed_rng (#3569)
    • Refactor test_get_observation_pairs (#3574)
    • Add tests for inf/nan objectives for ShapleyImportanceEvaluator (#3576)
    • Add deprecated warning test to the multi-objective sampler test file (#3601)
    • Simplify multi-objective TPE tests (#3653)
    • Add edge cases to multi-objective TPE tests (#3662)
    • Remove tests on TypeError (#3667)
    • Add edge cases to the tests of the parzen estimator (#3673)
    • Add tests for _constrained_dominates (#3683)
    • Refactor tests of constrained TPE (#3689)
    • Add inf and NaN tests for test_constraints_func (#3690)
    • Fix calling storage API in study tests (#3695, thanks @wattlebirdaz!)
    • DRY test_frozen.py (#3696)
    • Unify the tests of plot_contours (#3701)
    • Add test cases for crossovers of NSGAII (#3705)
    • Enhance the tests of NSGAIISampler._crowding_distance_sort (#3706)
    • Unify edf test files (#3730)
    • Fix test_calculate_weights_below (#3741)
    • Refactor test_intermediate_plot.py (#3745)
    • Test samplers are reproducible (#3757)
    • Add tests for _dominates function (#3764)
    • DRY importance tests (#3785)
    • Move tests for create_trial (#3794)
    • Remove with_c_d option from prepare_study_with_trials (#3799)
    • Use DeterministicRelativeSampler in test_trial.py (#3807)
    • Add tests for _fast_non_dominated_sort (#3686)
    • Unify slice plot tests (#3784)
    • Unify the tests of plot_parallel_coordinates (#3800)
    • Unify optimization history tests (#3806)
    • Suppress warnings in tests for multi_objective module (#3911)
    • Remove warnings: UserWarning from tests/visualization_tests/test_utils.py (#3919, thanks @jmsykes83!)

    Code Fixes

    • Add test case of samplers for conditional objective function (#2904)
    • Fix #2949, remove BaseStudy (#2986, thanks @twsl!)
    • Use optuna.load_study in optuna ask CLI to omit direction/directions option (#2989)
    • Fix typo in Trial warning message (#3008, thanks @xadrianzetx!)
    • Replaces boston dataset with california housing dataset (#3011, thanks @avats-dev!)
    • Fix deprecation version of suggest APIs (#3054, thanks @xadrianzetx!)
    • Add remove_version to the missing @deprecated argument (#3064, thanks @nuka137!)
    • Add example of optuna.logging.get_verbosity (#3066, thanks @MasahitoKumada!)
    • Support {Float|Int}Distribution in NSGA-II crossover operators (#3139, thanks @xadrianzetx!)
    • Black fix (#3147)
    • Switch to FloatDistribution (#3166, thanks @xadrianzetx!)
    • Remove deprecated decorator of the feature of n_jobs (#3173, thanks @MasahitoKumada!)
    • Fix black and blackdoc errors (#3260, thanks @masap!)
    • Remove experimental label from MaxTrialsCallback (#3261, thanks @knshnb!)
    • Remove redundant _check_trial_id (#3264, thanks @masap!)
    • Make existing int/float distributions wrapper of {Int,Float}Distribution (#3244)
    • Switch to IntDistribution (#3181, thanks @nyanhi!)
    • Fix type hints for Python 3.8 (#3240)
    • Remove UniformDistribution, LogUniformDistribution and DiscreteUniformDistribution code paths (#3275)
    • Merge set_trial_state() and set_trial_values() into one function (#3323, thanks @masap!)
    • Follow up for {Float, Int}Distributions (#3337, thanks @nyanhi!)
    • Move the get_trial_xxx abstract functions to base (#3338, thanks @belldandyxtq!)
    • Update type hints of states (#3359, thanks @BasLaa!)
    • Remove unused function from RedisStorage (#3394, thanks @masap!)
    • Remove unnecessary string concatenation (#3406)
    • Follow coding style and fix typos in tests/integration_tests (#3408)
    • Fix log message formatting in filter_nonfinite (#3436)
    • Add RetryFailedTrialCallback to optuna.storages.* (#3441)
    • Unify fail_stale_trials in each storage implementation (#3442, thanks @knshnb!)
    • Ignore incomplete trials in matplotlib.plot_parallel_coordinate (#3415)
    • Update warning message and add a test when a trial fails with exception (#3454)
    • Remove old distributions from NSGA-II sampler (#3459)
    • Remove duplicated DB access in _log_completed_trial (#3551)
    • Reduce the number of copy.deepcopy() calls in importance module (#3554)
    • Remove duplicated check_trial_is_updatable (#3557)
    • Replace optuna.testing.integration.create_running_trial with study.ask (#3562)
    • Refactor test_get_observation_pairs (#3574)
    • Update label of feasible trials if constraints_func is specified (#3587)
    • Replace unused variable name with underscore (#3588)
    • Enable no-implicit-optional for mypy (#3599, thanks @harupy!)
    • Enable warn_redundant_casts for mypy (#3602, thanks @harupy!)
    • Refactor the type of value of TrialIntermediateValueModel (#3603)
    • Fix broken mypy checks of Alembic's get_current_head() method (#3608)
    • Move heartbeat-related thread operation in _optimize.py to _heartbeat.py (#3609)
    • Sort dependencies by name (#3614)
    • Add typehint for deprecated and experimental (#3575)
    • Remove useless object inheritance (#3628, thanks @harupy!)
    • Remove useless except clauses (#3632, thanks @harupy!)
    • Rename optuna.testing.integration with optuna.testing.pruner (#3638)
    • Cosmetic fix in Optuna CLI (#3641)
    • Enable strict_equality for mypy #3579 (#3648, thanks @wattlebirdaz!)
    • Make file names in testing consistent with optuna module (#3657)
    • Remove the implementation of read_trials_from_remote_storage in the all storages apart from CachedStorage (#3659)
    • Remove unnecessary deep copy in Redis storage (#3672, thanks @wattlebirdaz!)
    • Workaround mypy bug (#3679)
    • Unify plot_contours (#3682)
    • Remove storage.get_all_study_summaries(include_best_trial: bool) (#3697, thanks @wattlebirdaz!)
    • Unify the logic of edf functions (#3698)
    • Unify the logic of plot_param_importances functions (#3700)
    • Enable disallow_untyped_calls for mypy (#3704, thanks @29Takuya!)
    • Use get_trials with states argument to filter trials depending on trial state (#3708)
    • Return Python's native float values (#3714)
    • Simplify bayesmark benchmark report rendering (#3725)
    • Unify the logic of intermediate plot (#3731)
    • Unify the logic of slice plot (#3732)
    • Unify the logic of plot_parallel_coordinates (#3734)
    • Unify implementation of plot_optimization_history between plotly and matplotlib (#3736)
    • Extract fail_objective and pruned_objective for tests (#3737)
    • Remove deprecated storage functions (#3744, thanks @29Takuya!)
    • Remove unnecessary optionals from visualization/_pareto_front.py (#3752)
    • Change types inside _ParetoInfoType (#3753)
    • Refactor pareto front (#3754)
    • Use _ContourInfo to plot in plot_contour (#3755)
    • Follow up #3465 (#3763)
    • Refactor importances plot (#3765)
    • Remove no_trials option of prepare_study_with_trials (#3766)
    • Follow the coding style of comments in plot_contour files (#3767)
    • Raise ValueError for invalid returned type of target in _filter_nonfinite (#3768)
    • Fix value error condition in plot_contour (#3769)
    • DRY constraints in Sampler.after_trial (#3775)
    • DRY stop_objective (#3786)
    • Refactor non-exist param test in plot_contour test (#3787)
    • Remove less_than_two and more_than_three options from prepare_study_with_trials (#3789)
    • Fix return value's type of _get_node_value (#3818)
    • Remove unused type: ignore (#3832)
    • Fix typos and remove unused argument in QMCSampler (#3837)
    • Unify tests for plot_param_importances (#3760)
    • Refactor test_pareto_front (#3798)
    • Remove duplicated definition of CategoricalChoiceType from optuna.distributions (#3846)
    • Revert most of changes by 3651 (#3848)
    • Attach abstractmethod decorator to BaseStorage.get_trial_id_from_study_id_trial_number (#3870, thanks @wattlebirdaz!)
    • Refactor BaseStorage.get_best_trial (#3871, thanks @wattlebirdaz!)
    • Simplify IntersectionSearchSpace.calculate (#3887)
    • Replace q with step in private function and warning message (#3913)
    • Reduce warnings in storage tests (#3917)
    • Reduce trivial warning messages from tests/sampler_tests (#3921)

    Continuous Integration

    • Install botorch to CI jobs on mac (#2988)
    • Use libomp 11.1.0 for Mac (#3024)
    • Run mac-tests CI at a scheduled time (#3028)
    • Set concurrency to github workflows (#3095)
    • Skip CLI tests when calculating the coverage (#3097)
    • Migrate mypy version to 0.910 (#3123)
    • Avoid installing the latest MLfow to prevent doctests from failing (#3135)
    • Use python 3.8 for CI and docker (#3026)
    • Add performance benchmarks using kurobako (#3155)
    • Use Python 3.7 in checks CI job (#3239)
    • Add performance benchmarks using bayesmark (#3354)
    • Fix speed benchmarks (#3362)
    • Pin setuptools (#3427)
    • Introduce the benchmark for multi-objectives samplers (#3271, thanks @drumehiron!)
    • Use coverage directly (#3347, thanks @higucheese!)
    • Add WFG benchmark test (#3349, thanks @kei-mo!)
    • Add workflow to use reviewdog (#3357)
    • Add NASBench201 from NASLib (#3465)
    • Fix speed benchmarks CI (#3470)
    • Support PyTorch 1.11.0 (#3510)
    • Install 3rd party libraries in CI for lint (#3580)
    • Make bayesmark benchmark results comparable to kurobako (#3584)
    • Restore virtualenv for benchmark extras (#3585)
    • Use protobuf<4.0.0 to resolve Sphinx CI error (#3591)
    • Unpin protobuf (#3598, thanks @harupy!)
    • Extract MPI tests from integration CI as independent CI (#3606)
    • Enable warn_unused_ignores for mypy (#3627, thanks @harupy!)
    • Add onnx and version constrained protobuf to document dependencies (#3658)
    • Add mo-kurobako benchmark to CI (#3691)
    • Enable mypy's strict configs (#3710)
    • Run visual regression tests to find regression bugs of visualization module (#3721)
    • Remove downloading old libomp for mac tests (#3728)
    • Match Python versions between bayesmark CI jobs (#3750)
    • Set OMPI_MCA_rmaps_base_oversubscribe=yes before mpirun (#3758)
    • Add budget option to benchmarks (#3774)
    • Add n_concurrency option to benchmarks (#3776)
    • Use n-runs instead of repeat to represent the number of studies in the bayesmark benchmark (#3780)
    • Fix type hints for mypy 0.971 (#3797)
    • Pin scipy to avoid the CI failure (#3834)
    • Extract float value from tensor for trial.report in PyTorchLightningPruningCallback (#3842)

    Other

    • Bump up version to 2.11.0dev (#2976)
    • Add roadmap news to README.md (#2999)
    • Bump up version number to 3.0.0a1.dev (#3006)
    • Add Python 3.9 to tox.ini (#3025)
    • Fix version number to 3.0.0a0 (#3140)
    • Bump up version to v3.0.0a1.dev (#3142)
    • Introduce a form to make TODOs explicit when creating issues (#3169)
    • Bump up version to v3.0.0b0.dev (#3289)
    • Add description field for question-and-help-support (#3305)
    • Update README to inform v3.0.0a2 (#3314)
    • Add Optuna-related URLs for PyPi (#3355, thanks @andriyor!)
    • Bump Optuna to v3.0.0-b0 (#3458)
    • Bump up version to v3.0.0b1.dev (#3457)
    • Fix kurobako benchmark code to run it locally (#3468)
    • Fix label of issue template (#3493)
    • Improve issue templates (#3536)
    • Hotfix for fakeredis 1.7.4 release (#3549)
    • Remove the version constraint of fakeredis (#3561)
    • Relax version constraint of fakeredis (#3607)
    • Shorten the durations of the stale bot for PRs (#3611)
    • Clarify the criteria to assign reviewers in the PR template (#3619)
    • Bump up version number to v3.0.0rc0.dev (#3621)
    • Make tox.ini consistent with checking (#3654)
    • Avoid to stale description-checked issues (#3816)
    • Bump up version to v3.0.0.dev (#3852)
    • Bump up version to v3.0.0 (#3933)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @29Takuya, @BasLaa, @CorentinNeovision, @Crissman, @HideakiImamura, @Hiroyuki-01, @IEP, @MasahitoKumada, @Rohan138, @TakuyaInoue-github, @abatomunkuev, @akawashiro, @andriyor, @avats-dev, @belldandyxtq, @belltailjp, @c-bata, @captain-pool, @cfkazu, @chezou, @contramundum53, @divyanshugit, @drumehiron, @dubey-anshuman, @fukatani, @g-votte, @gasin, @harupy, @higucheese, @himkt, @hppRC, @hvy, @jmsykes83, @kasparthommen, @kei-mo, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @liaison, @ll7, @makinzm, @makkimaki, @masaaldosey, @masap, @nlgranger, @not522, @nuka137, @nyanhi, @nzw0301, @semiexp, @shu65, @sidshrivastav, @sile, @solegalli, @takoika, @tohmae, @toshihikoyanase, @tsukudamayo, @tupui, @twsl, @wattlebirdaz, @xadrianzetx, @xuzijian629, @y0z, @yoshinobc, @ytsmiling

  • v3.0.0-rc0 (Aug 8, 2022)

    This is the release note of v3.0.0-rc0. This is a release candidate of Optuna V3. We plan to release the major version within a few weeks. Please try this version and report bugs!

    Highlights

    Constrained Optimization Support for TPE

    TPESampler, the default sampler of Optuna, now supports constrained optimization. It takes a function constraints_func as an argument, and examines whether trials are feasible or not. Feasible trials are prioritized over infeasible ones similarly to NSGAIISampler. See #3506 for more details.

    import optuna
    
    def objective(trial):
        # Binh and Korn function with constraints.
        x = trial.suggest_float("x", -15, 30)
        y = trial.suggest_float("y", -15, 30)
    
        # Store the constraints as user attributes so that they can be restored after optimization.
        c0 = (x - 5) ** 2 + y ** 2 - 25
        c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7
        trial.set_user_attr("constraints", (c0, c1))
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
    
        return v0, v1
    
    def constraints(trial):
        return trial.user_attrs["constraints"]
    
    if __name__ == "__main__":
        sampler = optuna.samplers.TPESampler(
            constraints_func=constraints,
        )
        study = optuna.create_study(
            directions=["minimize", "minimize"],
            sampler=sampler,
        )
        study.optimize(objective, n_trials=1000)
    
        optuna.visualization.plot_pareto_front(study, constraints_func=constraints).show()
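
    Note that, as with NSGAIISampler, a constraint value greater than zero means the constraint is violated, so the trial above is considered feasible only when both c0 <= 0 and c1 <= 0 hold.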
    

    (Figures: MOTPE without constraints vs. MOTPE with constraints.)

    A Major Refactoring of Visualization Module

    We have undertaken a major refactoring of the visualization features as one of the main tasks of Optuna V3. The current status is as follows.

    Unification of implementations of different backends: plotly and matplotlib

    Historically, the implementations of Optuna's visualization features were split between two backends, plotly and matplotlib. Much of this code was duplicated and unmaintainable, and many features were implemented as a single large function, which hurt testability and, in turn, caused many bugs. We clarified the specification that each visualization function must meet and defined the backend-independent information needed to render each plot. By sharing this information across backends, we achieved a highly maintainable and testable implementation and dramatically improved the stability of the visualization functions. We are currently rewriting the unit tests, and the resulting tests will be simple yet powerful.

    Visual Regression Test

    It is very important to detect hidden bugs in the implementation through PR reviews. However, visualization code tends to contain bugs that are difficult to find just by reading the code, and many of them are only revealed when the visualization is actually rendered. We therefore introduced visual regression tests to improve the review process. In a PR that touches the visualization features, reviewers can follow a link generated within the PR to the visual regression test results and verify that the implementation renders the plots properly.


    Improve Code Quality, Including Many Bug Fixes

    In the latter half of the Optuna v3 development cycle, we put emphasis on improving the overall code quality of the library. We fixed several bugs and possible corruptions of internal data structures, e.g. in the handling of inf/NaN values (#3567, #3592, #3738, #3739, #3740) and invalid inputs (#3668, #3808, #3814, #3819). For example, before v3 there were bugs when NaN values were used in a CategoricalDistribution or with GridSampler. In several other functions, NaN values were unacceptable, but the library failed silently without any warning or error. Such bugs are fixed in this release.

    New Features

    • Add constraints option to TPESampler (#3506)
    • Add skip_if_exists argument to enqueue_trial (#3629)
    • Remove experimental from plot_pareto_front (#3643)
    • Add popsize argument to CmaEsSampler (#3649)
    • Add seed argument for BoTorchSampler (#3756)
    • Add seed argument for SkoptSampler (#3791)
    • Revert AllenNLP integration back to experimental (#3822)

    Enhancements

    • Move is_heartbeat_enabled from storage to heartbeat (#3596)
    • Refactor ImportanceEvaluators (#3597)
    • Avoid maximum limit when MLflow saves information (#3651)
    • Control metric decimal digits precision in bayesmark benchmark report (#3693)
    • Support inf values for crowding distance (#3743)
    • Normalize importance values (#3828)

    Bug Fixes

    • Fix CategoricalDistribution with NaN (#3567)
    • Fix NaN comparison in grid sampler (#3592)
    • Fix bug in IntersectionSearchSpace (#3666)
    • Remove trial_values records whose values are None (#3668)
    • Fix PostgreSQL primary key unsorted problem (#3702, thanks @wattlebirdaz!)
    • Raise error on NaN in _constrained_dominates (#3738)
    • Fix inf-related issue on implementation of _calculate_nondomination_rank (#3739)
    • Raise errors for NaN in constraint values (#3740)
    • Fix _calculate_weights such that it throws ValueError on invalid weights (#3742)
    • Change warning for axis_order of plot_pareto_front (#3802)
    • Fix check for number of objective values (#3808)
    • Raise ValueError when waiting trial is told (#3814)
    • Fix Study.tell with invalid values (#3819)
    • Fix infeasible case in NSGAII test (#3839)

    Installation

    • Add a version constraint of cached-path (#3665)

    Documentation

    • Add documentation of SHAP integration (#3623)
    • Remove news entry on Optuna user survey (#3645)
    • Introduce optuna-fast-fanova (#3647)
    • Add github discussions link (#3660)
    • Fix a variable name of ask-and-tell tutorial (#3663)
    • Clarify which trials are used for importance evaluators (#3707)
    • Fix typo in Study.optimize (#3720, thanks @29Takuya!)
    • Update link to plotly's jupyterlab-support page (#3722, thanks @29Takuya!)
    • Update CONTRIBUTING.md (#3726)
    • Remove "Edit on Github" button (#3777, thanks @cfkazu!)
    • Remove duplicated period at the end of copyright (#3778)
    • Add note for deprecation of plot_pareto_front's axis_order (#3803)
    • Describe the purpose of prepare_study_with_trials (#3809)
    • Fix a typo in docstring of ShapleyImportanceEvaluator (#3810)
    • Add a reference for MOTPE (#3838, thanks @y0z!)

    Examples

    • Introduce stale bot (https://github.com/optuna/optuna-examples/pull/119)
    • Use Hydra 1.2 syntax (https://github.com/optuna/optuna-examples/pull/122)
    • Fix CI due to thop (https://github.com/optuna/optuna-examples/pull/123)
    • Hotfix allennlp dependency (https://github.com/optuna/optuna-examples/pull/124)
    • Remove unreferenced variable in pytorch_simple.py (https://github.com/optuna/optuna-examples/pull/125)
    • set OMPI_MCA_rmaps_base_oversubscribe=yes before mpirun (https://github.com/optuna/optuna-examples/pull/126)

    Tests

    • Simplify multi-objective TPE tests (#3653)
    • Add edge cases to multi-objective TPE tests (#3662)
    • Remove tests on TypeError (#3667)
    • Add edge cases to the tests of the parzen estimator (#3673)
    • Add tests for _constrained_dominates (#3683)
    • Refactor tests of constrained TPE (#3689)
    • Add inf and NaN tests for test_constraints_func (#3690)
    • Fix calling storage API in study tests (#3695, thanks @wattlebirdaz!)
    • DRY test_frozen.py (#3696)
    • Unify the tests of plot_contours (#3701)
    • Add test cases for crossovers of NSGAII (#3705)
    • Enhance the tests of NSGAIISampler._crowding_distance_sort (#3706)
    • Unify edf test files (#3730)
    • Fix test_calculate_weights_below (#3741)
    • Refactor test_intermediate_plot.py (#3745)
    • Test samplers are reproducible (#3757)
    • Add tests for _dominates function (#3764)
    • DRY importance tests (#3785)
    • Move tests for create_trial (#3794)
    • Remove with_c_d option from prepare_study_with_trials (#3799)
    • Use DeterministicRelativeSampler in test_trial.py (#3807)

    Code Fixes

    • Add typehint for deprecated and experimental (#3575)
    • Move heartbeat-related thread operation in _optimize.py to _heartbeat.py (#3609)
    • Remove useless object inheritance (#3628, thanks @harupy!)
    • Remove useless except clauses (#3632, thanks @harupy!)
    • Rename optuna.testing.integration with optuna.testing.pruner (#3638)
    • Cosmetic fix in Optuna CLI (#3641)
    • Enable strict_equality for mypy #3579 (#3648, thanks @wattlebirdaz!)
    • Make file names in testing consistent with optuna module (#3657)
    • Remove the implementation of read_trials_from_remote_storage in the all storages apart from CachedStorage (#3659)
    • Remove unnecessary deep copy in Redis storage (#3672, thanks @wattlebirdaz!)
    • Workaround mypy bug (#3679)
    • Unify plot_contours (#3682)
    • Remove storage.get_all_study_summaries(include_best_trial: bool) (#3697, thanks @wattlebirdaz!)
    • Unify the logic of edf functions (#3698)
    • Unify the logic of plot_param_importances functions (#3700)
    • Enable disallow_untyped_calls for mypy (#3704, thanks @29Takuya!)
    • Use get_trials with states argument to filter trials depending on trial state (#3708)
    • Return Python's native float values (#3714)
    • Simplify bayesmark benchmark report rendering (#3725)
    • Unify the logic of intermediate plot (#3731)
    • Unify the logic of slice plot (#3732)
    • Unify the logic of plot_parallel_coordinates (#3734)
    • Unify implementation of plot_optimization_history between plotly and matplotlib (#3736)
    • Extract fail_objective and pruned_objective for tests (#3737)
    • Remove deprecated storage functions (#3744, thanks @29Takuya!)
    • Remove unnecessary optionals from visualization/_pareto_front.py (#3752)
    • Change types inside _ParetoInfoType (#3753)
    • Refactor pareto front (#3754)
    • Use _ContourInfo to plot in plot_contour (#3755)
    • Follow up #3465 (#3763)
    • Refactor importances plot (#3765)
    • Remove no_trials option of prepare_study_with_trials (#3766)
    • Follow the coding style of comments in plot_contour files (#3767)
    • Raise ValueError for invalid returned type of target in _filter_nonfinite (#3768)
    • Fix value error condition in plot_contour (#3769)
    • DRY constraints in Sampler.after_trial (#3775)
    • DRY stop_objective (#3786)
    • Refactor non-exist param test in plot_contour test (#3787)
    • Remove less_than_two and more_than_three options from prepare_study_with_trials (#3789)
    • Fix return value's type of _get_node_value (#3818)
    • Remove unused type: ignore (#3832)
    • Fix typos and remove unused argument in QMCSampler (#3837)

    Continuous Integration

    • Use coverage directly (#3347, thanks @higucheese!)
    • Install 3rd party libraries in CI for lint (#3580)
    • Make bayesmark benchmark results comparable to kurobako (#3584)
    • Enable warn_unused_ignores for mypy (#3627, thanks @harupy!)
    • Add onnx and version constrained protobuf to document dependencies (#3658)
    • Add mo-kurobako benchmark to CI (#3691)
    • Enable mypy's strict configs (#3710)
    • Run visual regression tests to find regression bugs of visualization module (#3721)
    • Remove downloading old libomp for mac tests (#3728)
    • Match Python versions between bayesmark CI jobs (#3750)
    • Set OMPI_MCA_rmaps_base_oversubscribe=yes before mpirun (#3758)
    • Add budget option to benchmarks (#3774)
    • Add n_concurrency option to benchmarks (#3776)
    • Use n-runs instead of repeat to represent the number of studies in the bayesmark benchmark (#3780)
    • Fix type hints for mypy 0.971 (#3797)
    • Pin scipy to avoid the CI failure (#3834)
    • Extract float value from tensor for trial.report in PyTorchLightningPruningCallback (#3842)

    Other

    • Clarify the criteria to assign reviewers in the PR template (#3619)
    • Bump up version number to v3.0.0rc0.dev (#3621)
    • Make tox.ini consistent with checking (#3654)
    • Avoid to stale description-checked issues (#3816)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @29Takuya, @HideakiImamura, @c-bata, @cfkazu, @contramundum53, @g-votte, @harupy, @higucheese, @himkt, @hvy, @keisuke-umezawa, @knshnb, @not522, @nzw0301, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @y0z

  • v2.10.1 (Jun 13, 2022)

    This is the release note of v2.10.1.

    This is a patch release to resolve issues in the documentation build. No feature updates are included.

    Installation

    • Fix document build of v2.10.1 (#3642)

    Documentation

    • Backport #3590: Replace youtube.com with youtube-nocookie.com (#3633)

    Other

    • Bump up version to v2.10.1 (#3635)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @contramundum53, @toshihikoyanase

  • v3.0.0-b1 (Jun 6, 2022)

    This is the release note of v3.0.0-b1.

    Highlights

    A Samplers Comparison Table

    We added a sampler comparison table on the samplers' documentation page. It includes supported options (parameter types, pruning, multi-objective optimization, constrained optimization, etc.), time complexity, and recommended budgets for each sampler. Please use this to select appropriate samplers for your tasks! See #3571 and #3593 for more details.

    (Figure: samplers comparison table.)

    A New Importance Evaluator: ShapleyImportanceEvaluator

    Optuna now supports mean absolute SHAP values for evaluating parameter importances through integration with the SHAP library. The SHAP value is a game-theoretic measure of parameter importance with nice theoretical properties (see the paper for more information).


    To use mean absolute SHAP importances, pass an optuna.integration.shap.ShapleyImportanceEvaluator object to the evaluator argument of optuna.visualization.plot_param_importances or optuna.importance.get_param_importances.

    import optuna
    from optuna.integration.shap import ShapleyImportanceEvaluator
    
    # A toy objective added so the snippet is self-contained and runnable.
    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        y = trial.suggest_float("y", -10, 10)
        return x ** 2 + y
    
    study = optuna.create_study()
    study.optimize(objective, n_trials=100)
    
    optuna.visualization.plot_param_importances(study, evaluator=ShapleyImportanceEvaluator())
    

    See #3507 for more details.

    A New Benchmarking Task

    The benchmarking environment for black-box optimization algorithms on GitHub Actions was introduced in the previous release, and we have further enhanced its capabilities. The benchmarks can be run on any user's fork using GitHub Actions, and you can also freely customize them and run them on more computationally powerful clusters, for example on AWS, using the code in the optuna/benchmarks directory.

    Neural Architecture Search Benchmark Support

    Optuna's algorithms can now be benchmarked using NASLib, the Neural Architecture Search benchmark library. For now we only support one dataset, NASBench 201, which deals with image recognition. Larger datasets and datasets from other areas such as natural language processing will be supported in the future.

    (Figures: benchmark results on cifar10, cifar100, and imagenet16-120.)

    See README and #3465 for more information.

    Multi-objective Optimization Benchmark Support

    We can now benchmark our multi-objective optimization algorithms. The benchmarks are not yet available on GitHub Actions, but you can run optuna/benchmarks/run_mo_kurobako.py directly. They will be available on GitHub Actions in the next release, so stay tuned! See #3271 and #3349 for more details.

    Python 3.10 Support

    This is the first version to officially support Python 3.10. All tests pass, including those for the integration modules, with a few exceptions.

    Storage Database Migration

    To use Optuna v3.0.0-b1 with an RDBStorage created by previous versions of Optuna, please run optuna storage upgrade to migrate your database.

    # `YOUR_RDB_URL` is the URL of your database.
    optuna storage upgrade --storage YOUR_RDB_URL
    

    If you use RedisStorage, copy your study to an RDBStorage using copy_study with the version of Optuna you used to create the study, then run optuna storage upgrade with Optuna v3.0.0-b1. After upgrading the storage, copy the study back as a new RedisStorage study.

    python -c 'import optuna; optuna.copy_study(from_study_name="example", from_storage="redis://localhost:6379", to_storage="sqlite:///upgrade.db")'
    pip install --pre -U optuna
    optuna storage upgrade --storage sqlite:///upgrade.db
    python -c 'import optuna; optuna.copy_study(from_study_name="example", from_storage="sqlite:///upgrade.db", to_study_name="new-example", to_storage="redis://localhost:6379")'
    

    Breaking Changes

    • Fix distribution compatibility for linear and logarithmic distribution (#3444)
    • Remove get_study_id_from_trial_id (#3538)

    New Features

    • Add targets argument to plot_pareto_plont of plotly backend (#3495, thanks @TakuyaInoue-github!)
    • Support constraints_func in plot_pareto_front in matplotlib visualization (#3497, thanks @fukatani!)
    • Calculate the feature importance with mean absolute SHAP values (#3507, thanks @liaison!)
    • Make GridSampler reproducible (#3527, thanks @gasin!)
    • Replace ValueError with warning in GridSearchSampler (#3545)
    • Implement callbacks argument of OptunaSearchCV (#3577)
    • Add option to skip table creation to RDBStorage (#3581)

    Enhancements

    • Set precision of sqlalchemy.Float in RDBStorage table definition (#3327)
    • Accept nan in trial.report (#3348, thanks @belldandyxtq!)
    • Lazy import of alembic, sqlalchemy, and scipy (#3381)
    • Unify pareto front (#3389, thanks @semiexp!)
    • Make set_trial_param() of RedisStorage faster (#3391, thanks @masap!)
    • Make _set_best_trial() of RedisStorage faster (#3392, thanks @masap!)
    • Make set_study_directions() of RedisStorage faster (#3393, thanks @masap!)
    • Make optuna compatible with wandb sweep panels (#3403, thanks @captain-pool!)
    • Change "#Trials" to "Trial" in plot_slice, plot_pareto_front, and plot_optimization_history (#3449, thanks @dubey-anshuman!)
    • Make contour plots handle trials with nonfinite values (#3451)
    • Query studies for trials only once in EDF plots (#3460)
    • Make Parallel-Coordinate plots handle trials with nonfinite values (#3471, thanks @divyanshugit!)
    • Separate heartbeat functionality from BaseStorage (#3475)
    • Remove torch.distributed calls from TorchDistributedTrial properties (#3490, thanks @nlgranger!)
    • Remove the internal logic that calculates the interaction of two or more variables in fANOVA (#3543)
    • Handle inf/-inf for trial_values table in RDB (#3559)
    • Add intermediate_value_type column to represent inf/-inf on RDBStorage (#3564)

    Bug Fixes

    • Import COLOR_SCALE inside import util context (#3492)
    • Remove -v option of optuna study set-user-attr command (#3499, thanks @nyanhi!)
    • Filter trials with nonfinite value in optuna.visualization.plot_param_importances and optuna.visualization.matplotlib.plot_param_importance (#3500, thanks @takoika!)
    • Fix --verbose and --quiet options in CLI (#3532, thanks @nyanhi!)
    • Replace ValueError with RuntimeError in get_best_trial (#3541)
    • Take the same search space as in CategoricalDistribution by GridSampler (#3544)

    Installation

    • Partially support Python 3.10 (#3353)
    • Clean up setup.py (#3517)
    • Remove duplicate requirements from document section (#3613)

    Documentation

    • Clean up exception docstrings (#3429)
    • Revise docstring in MLFlow and WandB callbacks (#3477)
    • Change the parameter name from classifier to regressor in the code snippet of README.md (#3481)
    • Add link to Minituna in CONTRIBUTING.md (#3482)
    • Fix benchmarks/README.md for the bayesmark section (#3496)
    • Mention Study.stop as a criteria to stop creating trials in document (#3498, thanks @takoika!)
    • Fix minor English errors in the docstring of study.optimize (#3505)
    • Add Python 3.10 in supported version in README.md (#3508)
    • Remove articles at the beginning of sentences in crossovers (#3509)
    • Correct FrozenTrial's docstring (#3514)
    • Mention specify hyperparameter tutorial (#3515)
    • Fix typo in MLFlow callback (#3533)
    • Improve docstring of GridSampler's seed option (#3568)
    • Add the samplers comparison table (#3571)
    • Replace youtube.com with youtube-nocookie.com (#3590)
    • Fix time complexity of the samplers comparison table (#3593)
    • Remove language from docs configuration (#3594)

    Examples

    • Fix version of JAX (https://github.com/optuna/optuna-examples/pull/99)
    • Remove constraints by #99 (https://github.com/optuna/optuna-examples/pull/100)
    • Replace some methods in the sklearn example (https://github.com/optuna/optuna-examples/pull/102, thanks @MasahitoKumada!)
    • Add Python3.10 in allennlp.yml (https://github.com/optuna/optuna-examples/pull/104)
    • Remove numpy (https://github.com/optuna/optuna-examples/pull/105)
    • Add python 3.10 to fastai CI (https://github.com/optuna/optuna-examples/pull/106)
    • Add python 3.10 to non-integration examples CIs (https://github.com/optuna/optuna-examples/pull/107)
    • Add python 3.10 to Hiplot CI (https://github.com/optuna/optuna-examples/pull/108)
    • Add a comma to visualization.yml (https://github.com/optuna/optuna-examples/pull/109)
    • Rename WandB example to follow naming rules (https://github.com/optuna/optuna-examples/pull/110)
    • Add scikit-learn version constraint for Dask-ML (https://github.com/optuna/optuna-examples/pull/112)
    • Add python 3.10 to sklearn CI (https://github.com/optuna/optuna-examples/pull/113)
    • Set version constraint of protobuf in PyTorch Lightning example (https://github.com/optuna/optuna-examples/pull/116)

    Tests

    • Improve matplotlib parallel coordinate test (#3368)
    • Save figures for all matplotlib tests (#3414, thanks @divyanshugit!)
    • Add inf test to intermediate values test (#3466)
    • Add test cases for test_storages.py (#3480)
    • Improve the tests of optuna.visualization.plot_pareto_front (#3546)
    • Move heartbeat-related tests in test_storages.py to another file (#3553)
    • Use seed method of np.random.RandomState for reseeding and fix test_reseed_rng (#3569)
    • Refactor test_get_observation_pairs (#3574)
    • Add tests for inf/nan objectives for ShapleyImportanceEvaluator (#3576)
    • Add deprecated warning test to the multi-objective sampler test file (#3601)

    Code Fixes

    • Ignore incomplete trials in matplotlib.plot_parallel_coordinate (#3415)
    • Update warning message and add a test when a trial fails with exception (#3454)
    • Remove old distributions from NSGA-II sampler (#3459)
    • Remove duplicated DB access in _log_completed_trial (#3551)
    • Reduce the number of copy.deepcopy() calls in importance module (#3554)
    • Remove duplicated check_trial_is_updatable (#3557)
    • Replace optuna.testing.integration.create_running_trial with study.ask (#3562)
    • Refactor test_get_observation_pairs (#3574)
    • Update label of feasible trials if constraints_func is specified (#3587)
    • Replace unused variable name with underscore (#3588)
    • Enable no-implicit-optional for mypy (#3599, thanks @harupy!)
    • Enable warn_redundant_casts for mypy (#3602, thanks @harupy!)
    • Refactor the type of value of TrialIntermediateValueModel (#3603)
    • Fix broken mypy checks of Alembic's get_current_head() method (#3608)
    • Sort dependencies by name (#3614)

    Continuous Integration

    • Introduce the benchmark for multi-objectives samplers (#3271, thanks @drumehiron!)
    • Add WFG benchmark test (#3349, thanks @kei-mo!)
    • Add workflow to use reviewdog (#3357)
    • Add NASBench201 from NASLib (#3465)
    • Fix speed benchmarks CI (#3470)
    • Support PyTorch 1.11.0 (#3510)
    • Restore virtualenv for benchmark extras (#3585)
    • Use protobuf<4.0.0 to resolve Sphinx CI error (#3591)
    • Unpin protobuf (#3598, thanks @harupy!)
    • Extract MPI tests from integration CI as independent CI (#3606)

    Other

    • Bump up version to v3.0.0b1.dev (#3457)
    • Fix kurobako benchmark code to run it locally (#3468)
    • Fix label of issue template (#3493)
    • Improve issue templates (#3536)
    • Hotfix for fakeredis 1.7.4 release (#3549)
    • Remove the version constraint of fakeredis (#3561)
    • Relax version constraint of fakeredis (#3607)
    • Shorten the durations of the stale bot for PRs (#3611)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @HideakiImamura, @MasahitoKumada, @TakuyaInoue-github, @belldandyxtq, @c-bata, @captain-pool, @contramundum53, @divyanshugit, @drumehiron, @dubey-anshuman, @fukatani, @g-votte, @gasin, @harupy, @himkt, @hvy, @kei-mo, @keisuke-umezawa, @knshnb, @liaison, @masap, @nlgranger, @not522, @nyanhi, @nzw0301, @semiexp, @sile, @takoika, @toshihikoyanase, @xadrianzetx

• v3.0.0-b0 (Apr 12, 2022)

    This is the release note of v3.0.0-b0.

    Highlights

    Simplified Distribution Classes: Float, Int and Categorical

    Search space definitions, which consist of BaseDistribution and its child classes in Optuna, are greatly simplified. We have introduced FloatDistribution, IntDistribution, and CategoricalDistribution. If you use the suggest API and Study.optimize, the search space information is stored as these three distributions. Previous UniformDistribution, LogUniformDistribution, DiscreteUniformDistribution, IntUniformDistribution, and IntLogUniformDistribution are deprecated. If you pass deprecated distributions to APIs such as Study.ask or create_trial, they are internally converted to corresponding FloatDistribution or IntDistribution.
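
As a minimal sketch (not from the release note itself), parameters suggested through the suggest API are recorded with the unified classes:

import optuna
from optuna.distributions import FloatDistribution, IntDistribution

def objective(trial):
    x = trial.suggest_float("x", 0.0, 1.0)
    n = trial.suggest_int("n", 1, 10)
    return x * n

study = optuna.create_study()
study.optimize(objective, n_trials=1)

# Both parameters are stored with the new unified distribution classes.
trial = study.trials[0]
assert isinstance(trial.distributions["x"], FloatDistribution)
assert isinstance(trial.distributions["n"], IntDistribution)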

    Storage Database Migration

    To use Optuna v3.0.0-b0 with RDBStorage that was created in the previous versions of Optuna, please run optuna storage upgrade to migrate your database.
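
For example (a sketch with a hypothetical SQLite URL):

$ optuna storage upgrade --storage sqlite:///example.db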

If you use RedisStorage, copy your study to an RDBStorage using copy_study with the version of Optuna you used to create the study, then run optuna storage upgrade with Optuna v3.0.0-b0. After upgrading the storage, copy the study back to a new RedisStorage.

python -c 'import optuna; optuna.copy_study(from_study_name="example", from_storage="redis://localhost:6379", to_storage="sqlite:///upgrade.db")'
pip install --pre -U optuna
optuna storage upgrade --storage sqlite:///upgrade.db
python -c 'import optuna; optuna.copy_study(from_study_name="example", from_storage="sqlite:///upgrade.db", to_study_name="new-example", to_storage="redis://localhost:6379")'
    

    Consistent Ask-and-Tell Interface with Study.optimize

    Study.tell fails a trial when it is called with certain invalid combinations of state and values, instead of raising an error. This change aims to make Study.tell consistent with Study.optimize, which continues an optimization even if an objective returns an invalid value.

    Study.tell now also returns the resulting trial (FrozenTrial) in order to allow inspecting how the arguments were interpreted.

    Before

    Study.tell raises an exception when it is called with an invalid combination of state and values.

    study.tell(study.ask(), values=None)
    # Traceback (most recent call last):
    #   File "<stdin>", line 1, in <module>
    #   File "/…/optuna/optuna/study/study.py", line 579, in tell
    #     raise ValueError(
    # ValueError: No values were told. Values are required when state is TrialState.COMPLETE.
    

    After

    Study.tell automatically fails the trial.

trial: FrozenTrial = study.tell(study.ask(), values=None)
    assert trial.state == TrialState.FAIL
    

    See #3144 for more details.

    Stable Study APIs

We are converting all positional arguments of create_study, delete_study, load_study, and copy_study to keyword-only arguments, since the order of the arguments was inconsistent across these functions. This is not yet a breaking change, but if you call these functions with positional arguments, you will get a warning message asking you to switch to keyword arguments.
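
A minimal sketch (with a hypothetical study name and storage URL); keyword arguments avoid the warning:

import optuna

# Keyword-only style; positional arguments would emit a warning.
study = optuna.create_study(study_name="example", storage="sqlite:///example.db")
optuna.copy_study(
    from_study_name="example",
    from_storage="sqlite:///example.db",
    to_storage="sqlite:///copy.db",
)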

In addition, we have fixed all of the problems described in #2955, so the Study APIs are now stable. Specifically, Study.add_trial, Study.add_trials, Study.enqueue_trial, and copy_study have been stabilized.

    See #3270 and #2955 for more details.

    Improved Visualization

Several bugs in the visualization module have been resolved. For instance, the parallel coordinates plot now ignores trials with missing parameters (#3373), the scale of the objective value axis is fixed (#3369), and the EDF plot filters out trials with inf values (#3395 and #3435).
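
For example, trials with inf values are now filtered out of the EDF plot (a minimal sketch, not from the release note itself):

import optuna

def objective(trial):
    x = trial.suggest_float("x", -1.0, 1.0)
    return float("inf") if x < 0 else x

study = optuna.create_study()
study.optimize(objective, n_trials=30)

# Trials with inf values are filtered out (with a log message) before plotting.
fig = optuna.visualization.plot_edf(study)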

Before: Trials with missing parameters are wrongly connected to each other.

After: Trials with missing parameters are removed from the plot.

    Breaking Changes

    • Add option to exclude best trials from study summaries (#3109)
    • Migrate to {Float,Int}Distribution using alembic (#3113)
    • Move validation logic from _run_trial to study.tell (#3144)
    • Enable FloatDistribution and IntDistribution (#3246)
    • Use an enqueued parameter that is out of range from suggest API (#3298)
    • Convert deprecated distribution to new distribution internally (#3420)

    New Features

    • Add CatBoostPruningCallback (#2734, thanks @tohmae!)
    • Create common API for all NSGA-II crossover operations (#3221)
    • Add a history of retried trial numbers in Trial.system_attrs (#3223, thanks @belltailjp!)
    • Convert all positional arguments to keyword-only (#3270, thanks @higucheese!)
    • Stabilize study.py (#3309)
    • Add targets and deprecate axis_order in optuna.visualization.matplotlib.plot_pareto_front (#3341, thanks @shu65!)

    Enhancements

    • Enable cache for study.tell() (#3265, thanks @masap!)
    • Warn if heartbeat is used with ask-and-tell (#3273)
    • Make optuna.study.get_all_study_summaries() of RedisStorage fast (#3278, thanks @masap!)
    • Improve Ctrl-C interruption handling (#3374, thanks @CorentinNeovision!)
    • Use same colormap among plotly visualization methods (#3376)
    • Make EDF plots handle trials with nonfinite values (#3435)
    • Make logger message optional in filter_nonfinite (#3438)

    Bug Fixes

    • Make TPE work with a categorical variable with different choice types (#3190, thanks @keisukefukuda!)
    • Fix axis range issue in matplotlib contour plot (#3249, thanks @harupy!)
    • Allow fail_state_trials show warning when heartbeat is enabled (#3301)
    • Clip untransformed values sampled from int uniform distributions (#3319)
    • Fix missing user_attrs and system_attrs in study summaries (#3352)
    • Fix objective scale in parallel coordinate of Matplotlib (#3369)
    • Fix matplotlib.plot_parallel_coordinate with log distributions (#3371)
    • Fix parallel coordinate with missing value (#3373)
    • Add utility to filter trials with inf values from visualizations (#3395)
    • Return the best trial number, not worst trial number by best_index_ (#3410)
    • Avoid using px.colors.sequential.Blues that introduces pandas dependency (#3422)
    • Fix _is_reverse_scale (#3424)

    Installation

    • Drop TensorFlow support for Python 3.6 (#3296)
    • Pin AllenNLP version (#3367)
    • Skip run fastai job on Python 3.6 (#3412)
    • Avoid latest click==8.1.0 that removed a deprecated feature (#3413)
    • Avoid latest PyTorch lightning until integration is updated (#3417)
    • Revert "Avoid latest click==8.1.0 that removed a deprecated feature" (#3430)

    Documentation

    • Add reference to tutorial page in CLI (#3267, thanks @tsukudamayo!)
    • Carry over notes on step behavior to new distributions (#3276)
    • Correct the disable condition of show_progress_bar (#3287)
    • Add a document to lead FAQ and example of heartbeat (#3294)
    • Add a note for copy_study: it creates a copy regardless of its state (#3295)
    • Add note to recommend Python 3.8 or later in documentation build with artifacts (#3312)
    • Fix crossover references in Raises doc section (#3315)
    • Add reference to QMCSampler in tutorial (#3320)
    • Fix layout in tutorial (with workaround) (#3322)
    • Scikit-learn required for plot_param_importances (#3332, thanks @ll7!)
    • Add a link to multi-objective tutorial from a pareto front page (#3339, thanks @kei-mo!)
    • Add reference to tutorial page in visualization (#3340, thanks @Hiroyuki-01!)
    • Mention tutorials of User-Defined Sampler/Pruner from the API reference pages (#3342, thanks @hppRC!)
    • Add reference to saving/resuming study with RDB backend (#3345, thanks @Hiroyuki-01!)
    • Fix a typo (#3360)
    • Remove deprecated command optuna study optimize in FAQ (#3364)
    • Fix nit typo (#3380)
    • Add see also section for best_trial (#3396, thanks @divyanshugit!)
    • Updates the tutorial page for re-use the best trial (#3398, thanks @divyanshugit!)
    • Add explanation about Study.best_trials in multi-objective optimization tutorial (#3443)

    Examples

    • Remove Python 3.6 from haiku's CI (https://github.com/optuna/optuna-examples/pull/83)
    • Apply black 22.1.0 & run checks daily (https://github.com/optuna/optuna-examples/pull/84)
    • Add hiplot example (https://github.com/optuna/optuna-examples/pull/86)
    • Stop running jobs using TF with Python3.6 (https://github.com/optuna/optuna-examples/pull/87)
    • Pin AllenNLP version (https://github.com/optuna/optuna-examples/pull/89)
    • Add Medium link (https://github.com/optuna/optuna-examples/pull/91)
    • Use official CatBoostPruningCallback (https://github.com/optuna/optuna-examples/pull/92)
    • Stop running fastai job on Python 3.6 (https://github.com/optuna/optuna-examples/pull/93)
    • Specify Python version using str in workflow files (https://github.com/optuna/optuna-examples/pull/95)
    • Introduce upper version constraint of PyTorchLightning (https://github.com/optuna/optuna-examples/pull/96)
    • Update SimulatedAnnealingSampler to support FloatDistribution (https://github.com/optuna/optuna-examples/pull/97)

    Tests

    • Add plot value tests to matplotlib_tests/test_param_importances (#3180, thanks @belldandyxtq!)
    • Make tests of plot_optimization_history methods consistent (#3234)
    • Add integration test for RedisStorage (#3258, thanks @masap!)
    • Change the order of arguments in the catalyst integration test (#3308)
    • Cleanup MLflowCallback tests (#3378)
    • Test serialize/deserialize storage on parametrized conditions (#3407)
    • Add tests for parameter of 'None' for TPE (#3447)

    Code Fixes

    • Switch to IntDistribution (#3181, thanks @nyanhi!)
    • Fix type hints for Python 3.8 (#3240)
    • Remove UniformDistribution, LogUniformDistribution and DiscreteUniformDistribution code paths (#3275)
    • Merge set_trial_state() and set_trial_values() into one function (#3323, thanks @masap!)
    • Follow up for {Float, Int}Distributions (#3337, thanks @nyanhi!)
    • Move the get_trial_xxx abstract functions to base (#3338, thanks @belldandyxtq!)
    • Update type hints of states (#3359, thanks @BasLaa!)
    • Remove unused function from RedisStorage (#3394, thanks @masap!)
    • Remove unnecessary string concatenation (#3406)
    • Follow coding style and fix typos in tests/integration_tests (#3408)
    • Fix log message formatting in filter_nonfinite (#3436)
    • Add RetryFailedTrialCallback to optuna.storages.* (#3441)
    • Unify fail_stale_trials in each storage implementation (#3442, thanks @knshnb!)

    Continuous Integration

    • Add performance benchmarks using bayesmark (#3354)
    • Fix speed benchmarks (#3362)
    • Pin setuptools (#3427)

    Other

    • Bump up version to v3.0.0b0.dev (#3289)
    • Add description field for question-and-help-support (#3305)
    • Update README to inform v3.0.0a2 (#3314)
    • Add Optuna-related URLs for PyPi (#3355, thanks @andriyor!)
    • Bump Optuna to v3.0.0-b0 (#3458)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @BasLaa, @CorentinNeovision, @HideakiImamura, @Hiroyuki-01, @andriyor, @belldandyxtq, @belltailjp, @contramundum53, @divyanshugit, @harupy, @higucheese, @himkt, @hppRC, @hvy, @kei-mo, @keisuke-umezawa, @keisukefukuda, @knshnb, @ll7, @masap, @not522, @nyanhi, @nzw0301, @shu65, @sile, @tohmae, @toshihikoyanase, @tsukudamayo, @xadrianzetx

• v3.0.0-a2 (Feb 14, 2022)

    This is the release note of v3.0.0-a2.

    Highlights

    Study.optimize Warning Configuration Fix

This is a small release that fixes a bug where the same warning message was emitted more than once when calling Study.optimize.

    Bug Fixes

    • [Backport] Allow fail_state_trials show warning when heartbeat is enabled (#3303)

    Other

    • Bump Optuna (#3302)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @HideakiImamura, @himkt

• v3.0.0-a1 (Feb 7, 2022)

    This is the release note of v3.0.0-a1.

    Highlights

    Second alpha pre-release in preparation for the upcoming major version update v3.

    Included are several new features, improved optimization algorithms, removals of deprecated interfaces and many quality of life improvements.

    To read about the entire v3 roadmap, please refer to the Wiki.

    While this is a pre-release, we encourage users to keep using the latest releases of Optuna, including this one, for a smoother transition to the coming major release. Early feedback is welcome!

    A New Algorithm: Quasi-Monte Carlo Sampler

Now, you can utilize a new sampling algorithm based on the quasi-Monte Carlo method, optuna.samplers.QMCSampler. It is often a good alternative to the existing optuna.samplers.RandomSampler: the generated (sampled) sequences have lower discrepancy than standard uniformly random sequences. The figures below show the performance comparison with other existing samplers. Note that this algorithm is only supported for Python >= 3.7.

    See #2423 for more details.
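
A minimal usage sketch (assuming Python >= 3.7):

import optuna

def objective(trial):
    x = trial.suggest_float("x", -1.0, 1.0)
    y = trial.suggest_float("y", -1.0, 1.0)
    return x ** 2 + y ** 2

# QMCSampler draws low-discrepancy points that cover the search space more
# evenly than independent uniform sampling.
study = optuna.create_study(sampler=optuna.samplers.QMCSampler())
study.optimize(objective, n_trials=32)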

(Figures: Parkinson in HPOBench | Slice in HPOBench)

    Constraints Support for Pareto-front plot

The Pareto front plot now supports visualization of constrained optimization. In Optuna, NSGAIISampler and BoTorchSampler allow constrained optimization by taking a function constraints_func as an argument and using it to judge whether trials are feasible. optuna.visualization.plot_pareto_front accepts a similar function and uses it to plot the trials in different colors depending on whether they violate the constraints.

    See #3128 for more details.

    def objective(trial):
        # Binh and Korn function with constraints.
        x = trial.suggest_float("x", -15, 30)
        y = trial.suggest_float("y", -15, 30)
    
        # Store the constraints as user attributes so that they can be restored after optimization.
        c0 = (x - 5) ** 2 + y ** 2 - 25
        c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7
        trial.set_user_attr("constraints", (c0, c1))
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
    
        return v0, v1
    
    def constraints(trial):
        return trial.user_attrs["constraints"]
    
    if __name__ == "__main__":
        sampler = optuna.samplers.NSGAIISampler(
            constraints_func=constraints,
        )
        study = optuna.create_study(
            directions=["minimize", "minimize"],
            sampler=sampler,
        )
        study.optimize(objective, n_trials=1000)
    
        optuna.visualization.plot_pareto_front(study, constraints_func=constraints).show()
    


    Distribution Cleanup

We are actively working on cleaning up the distributions for integers and floating-point numbers. In Optuna v3, these distributions are unified into optuna.distributions.IntDistribution and optuna.distributions.FloatDistribution. v3.0.0-a1 contains several changes for this project, and you will temporarily see a UserWarning when you call Trial.suggest_int or Trial.suggest_float. We apologize for the inconvenience; the warning will be removed in the next release.

    See #2941 for more information.

    Stabilization of Experimental Modules

We have made the AllenNLP integration and FrozenTrial.create_trial stable.

See #3196 and #3228 for more information.

    Breaking Changes

    • Remove type_checking.py (#3235)

    New Features

    • Add QMC sampler (#2423, thanks @kstoneriv3!)
    • Refactor pareto front and support constraints_func in plot_pareto_front (#3128, thanks @semiexp!)
    • Add skip_if_finished flag to Study.tell (#3150, thanks @xadrianzetx!)
    • Add user_attrs argument to Study.enqueue_trial (#3185, thanks @knshnb!)
    • Option to inherit intermediate values in RetryFailedTrialCallback (#3269, thanks @knshnb!)
    • Add setter method for DiscreteUniformDistribution.q (#3283)
    • Stabilize allennlp integrations (#3228)
    • Stabilize create_trial (#3196)

    Enhancements

    • Reduce number of queries to fetch directions, user_attrs and system_attrs of study summaries (#3108)
    • Support FloatDistribution across codebase (#3111, thanks @xadrianzetx!)
    • Use json.loads to decode pruner configuration loaded from environment variables (#3114)
    • Show progress bar based on timeout (#3115, thanks @xadrianzetx!)
    • Support IntDistribution across codebase (#3126, thanks @nyanhi!)
    • Make progress bar available with n_jobs!=1 (#3138, thanks @masap!)
    • Wrap RedisStorage in CachedStorage (#3204, thanks @masap!)
    • Use functools.wraps in track_in_mlflow decorator (#3216)
    • Make RedisStorage fast when running multiple trials (#3262, thanks @masap!)
    • Reduce database query result for Study.ask() (#3274, thanks @masap!)

    Bug Fixes

    • Fix bug of nondeterministic behavior of TPESampler when group=True (#3187, thanks @xuzijian629!)
    • Handle non-numerical params in matplotlib.contour_plot (#3213, thanks @xadrianzetx!)
    • Fix log scale axes padding in matplotlib.contour_plot (#3218, thanks @xadrianzetx!)
    • Handle -inf and inf values in RDBStorage (#3238, thanks @xadrianzetx!)
    • Skip limiting the value if it is nan (#3286)

    Installation

    • Bump to torch related packages (#3156)
    • Use pytorch-lightning>=1.5.0 (#3157)
    • Remove testoutput from doctest of mlflow integration (#3170)
    • Restrict nltk version (#3201)
    • Add version constraints of setuptools (#3207)
    • Remove version constraint of setuptools (#3231)
    • Remove Sphinx version constraint (#3237)

    Documentation

    • Add a note logging_callback only works in single process situation (#3143)
    • Correct FrozenTrial's docstring (#3161)
    • Promote to use of v3.0.0a0 in README.md (#3167)
    • Mention tutorial of callback for Study.optimize from API page (#3171, thanks @xuzijian629!)
    • Add reference to tutorial page in study.enqueue_trial (#3172, thanks @knshnb!)
    • Fix typo in specify_params (#3174, thanks @knshnb!)
    • Guide to tutorial of Multi-objective Optimization in visualization tutorial (#3182, thanks @xuzijian629!)
    • Add explanation about Parallelize Optimization at FAQ (#3186, thanks @MasahitoKumada!)
    • Add order in tutorial (#3193, thanks @makinzm!)
    • Fix inconsistency in distributions documentation (#3222, thanks @xadrianzetx!)
    • Add FAQ entry for heartbeat (#3229)
    • Replace AUC with accuracy in docs (#3242)
    • Fix Raises section of FloatDistribution docstring (#3248, thanks @xadrianzetx!)
    • Add {Float,Int}Distribution to docs (#3252)
    • Update explanation for metrics of AllenNLPExecutor (#3253)
    • Add missing cli methods to the list (#3268)
    • Add docstring for property DiscreteUniformDistribution.q (#3279)

    Examples

    • Add pytorch-lightning DDP example (https://github.com/optuna/optuna-examples/pull/43, thanks @tohmae!)
    • Install latest AllenNLP (https://github.com/optuna/optuna-examples/pull/73)
    • Restrict nltk version (https://github.com/optuna/optuna-examples/pull/75)
    • Add version constraints of setuptools (https://github.com/optuna/optuna-examples/pull/76)
    • Remove constraint of setuptools (https://github.com/optuna/optuna-examples/pull/79)

    Tests

    • Add tests for transformer with upper bound parameter (#3163)
    • Add tests in visualization_tests/matplotlib_tests/test_slice.py (#3175, thanks @keisukefukuda!)
    • Add test case of the value in optimization history with matplotlib (#3176, thanks @TakuyaInoue-github!)
    • Add tests for generated plots of matplotlib.plot_edf (#3178, thanks @makinzm!)
    • Improve pareto front figure tests for matplotlib (#3183, thanks @akawashiro!)
    • Add tests for generated plots of plot_edf (#3188, thanks @makinzm!)
    • Match contour tests between Plotly and Matplotlib (#3192, thanks @belldandyxtq!)
    • Implement missing matplotlib.contour_plot test (#3232, thanks @xadrianzetx!)
    • Unify the validation function of edf value between visualization backends (#3233)
    • Add test for default grace period (#3263, thanks @masap!)
    • Add the missing tests of Plotly's plot_parallel_coordinate (#3266, thanks @MasahitoKumada!)
    • Switch function order progbar tests (#3280, thanks @BasLaa!)

    Code Fixes

    • Black fix (#3147)
    • Switch to FloatDistribution (#3166, thanks @xadrianzetx!)
    • Remove deprecated decorator of the feature of n_jobs (#3173, thanks @MasahitoKumada!)
    • Fix black and blackdoc errors (#3260, thanks @masap!)
    • Remove experimental label from MaxTrialsCallback (#3261, thanks @knshnb!)
    • Remove redundant _check_trial_id (#3264, thanks @masap!)
    • Make existing int/float distributions wrapper of {Int,Float}Distribution (#3244)

    Continuous Integration

    • Use python 3.8 for CI and docker (#3026)
    • Add performance benchmarks using kurobako (#3155)
    • Use Python 3.7 in checks CI job (#3239)

    Other

    • Bump up version to v3.0.0a1.dev (#3142)
    • Introduce a form to make TODOs explicit when creating issues (#3169)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @BasLaa, @HideakiImamura, @MasahitoKumada, @TakuyaInoue-github, @akawashiro, @belldandyxtq, @g-votte, @himkt, @hvy, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @makinzm, @masap, @not522, @nyanhi, @nzw0301, @semiexp, @tohmae, @toshihikoyanase, @tupui, @xadrianzetx, @xuzijian629

• v3.0.0-a0 (Dec 6, 2021)

    This is the release note of v3.0.0-a0.

    Highlights

    First alpha pre-release in preparation for the upcoming major version update v3.

    Included are several new features, improved optimization algorithms, removals of deprecated interfaces and many quality of life improvements.

    To read about the entire v3 roadmap, please refer to the Wiki.

    While this is a pre-release, we encourage users to keep using the latest releases of Optuna, including this one, for a smoother transition to the coming major release. Early feedback is welcome!

    CLI Improvements

Optuna CLI speed and usability have been improved. Previously, it took several seconds to launch a CLI command; #3000 significantly speeds up all commands by halving the module load time.

The usability of the ask-and-tell interface is also improved. The ask command allows users to define search spaces with short, simple JSON strings after #2905. The tell command supports --skip-if-finished, which ignores duplicated reports of values and statuses instead of raising errors; this improves robustness against, for instance, pod retries in cluster environments.
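
A duplicated tell can be skipped as follows (a sketch with a hypothetical storage URL and trial number; only the --skip-if-finished flag is new in this release):

$ optuna tell --storage sqlite:///mystorage.db --study-name mystudy \
    --trial-number 0 --values 1.0 --skip-if-finished

The before/after examples below show the simplified search-space format for ask.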

    Before:

    $ optuna ask --storage sqlite:///mystorage.db --study-name mystudy \
        --search-space '{"x": {"name": "UniformDistribution", "attributes": {"low": 0.0, "high": 1.0}}}'
    

    After:

    $ optuna ask --storage sqlite:///mystorage.db --study-name mystudy \
        --search-space '{"x": {"type": "float", "low": 0.0, "high": 1.0}}'
    

    New NSGA-II Crossover Options

    The optimization performance of NSGA-II has been greatly improved for real-valued problems. We introduce the crossover argument in NSGAIISampler. You can select several variants of the crossover option from uniform (default), blxalpha, sbx, vsbx, undx, and spx.

The following figure shows that the newly introduced crossover algorithms perform better than the existing algorithms, that is, the uniform crossover algorithm and a Gaussian process based algorithm, in terms of bias, convergence, and diversity. Note that the previous method, other implementations (in kurobako), and the default of the new method are all based on uniform crossover.

    See #2903 for more information.
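
A minimal sketch selecting one of the new operators (the string-valued crossover argument is the one introduced by #2903; the bi-objective function is a hypothetical example):

import optuna

def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x ** 2 + 4 * y ** 2, (x - 5) ** 2 + (y - 5) ** 2

sampler = optuna.samplers.NSGAIISampler(crossover="blxalpha")
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)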

(Figure: nsga2-nasbench, crossover performance comparison)

    New History Visualization with Multiple Studies

    The optimization history plot now supports visualization of multiple studies. It receives a list of studies. If the error_bar option is False, it outputs those histories in one figure. If the error_bar option is True, it calculates and shows the means and the standard deviations of those histories.

    See #2807 for more details.

    import optuna
    
    
    def objective(trial):
        return trial.suggest_float("x", 0, 1) ** 2
    
    
    n_studies = 5
    studies = [optuna.create_study(study_name=f"{i}th-study") for i in range(n_studies)]
    for study in studies:
        study.optimize(objective, n_trials=20)
    
    # This generates the first figure.
    fig = optuna.visualization.plot_optimization_history(studies)
    fig.write_image("./multiple.png")
    
    # This generates the second figure.
    fig = optuna.visualization.plot_optimization_history(studies, error_bar=True)
    fig.write_image("./error_bar.png")
    

(Figures: the multiple and error_bar plots generated above)

    AllenNLP Distributed Pruning

    The AllenNLP integration supports pruning in distributed environments. This change enables users to use the optuna_pruner callback option along with the distributed option as can be seen in the following training configuration. See #2977.

      ...
      trainer: {
        optimizer: 'adam',
        cuda_device: -1,
        callbacks: [
          {
            type: 'optuna_pruner',
          }
        ],
      },
      distributed: {
        cuda_devices: [-1, -1],
      },
    

    Preparations for Unification of Distributions Classes

There are several implementations of BaseDistribution in Optuna, such as UniformDistribution, DiscreteUniformDistribution, IntUniformDistribution, and CategoricalDistribution. This release includes part of the ongoing work to reduce the number of these distribution classes to just FloatDistribution, IntDistribution, and CategoricalDistribution, aligning the classes with the trial suggest interface (suggest_float, suggest_int, and suggest_categorical). Please note that we do not recommend using these distributions yet, because the samplers haven’t been updated to support them. See #3063 for more details.
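
Shown only to illustrate the unified classes (a sketch; constructor signatures follow #3063 and may still change):

from optuna.distributions import FloatDistribution, IntDistribution

# Not recommended for direct use yet; samplers do not support these classes.
FloatDistribution(low=0.0, high=1.0)
IntDistribution(low=1, high=10)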

    Breaking Changes

Some deprecated features, including the optuna.structs module, LightGBMTuner.best_booster, and the optuna dashboard command, are removed in #3057 and #3058. If you use such features, please migrate to the new ones.

| Removed APIs | Corresponding active APIs |
| --- | --- |
| optuna.structs.StudyDirection | optuna.study.StudyDirection |
| optuna.structs.StudySummary | optuna.study.StudySummary |
| optuna.structs.FrozenTrial | optuna.trial.FrozenTrial |
| optuna.structs.TrialState | optuna.trial.TrialState |
| optuna.structs.TrialPruned | optuna.exceptions.TrialPruned |
| optuna.integration.lightgbm.LightGBMTuner.best_booster | optuna.integration.lightgbm.LightGBMTuner.get_best_booster |
| optuna dashboard | optuna-dashboard |

    • Unify suggest APIs for floating-point parameters (#2990, thanks @xadrianzetx!)
    • Clean up deprecated features (#3057, thanks @nuka137!)
    • Remove optuna dashboard (#3058)

    New Features

    • Add interval for LightGBM callback (#2490)
    • Allow multiple studies and add error bar option to plot_optimization_history (#2807)
    • Support PyTorch-lightning DDP training (#2849, thanks @tohmae!)
    • Add crossover operators for NSGA-II (#2903, thanks @yoshinobc!)
    • Add abbreviated JSON formats of distributions (#2905)
    • Extend MLflowCallback interface (#2912, thanks @xadrianzetx!)
    • Support AllenNLP distributed pruning (#2977)
    • Make trial.user_attrs logging optional in MLflowCallback (#3043, thanks @xadrianzetx!)
    • Support multiple input of studies when plot with Matplotlib (#3062, thanks @TakuyaInoue-github!)
    • Add IntDistribution & FloatDistribution (#3063, thanks @nyanhi!)
    • Add trial.user_attrs to pareto_front hover text (#3082, thanks @kasparthommen!)
    • Support error bar for Matplotlib (#3122, thanks @TakuyaInoue-github!)
    • Add optuna tell with --skip-if-finished (#3131)

    Enhancements

    • Add single distribution support to BoTorchSampler (#2928)
    • Speed up import optuna (#3000)
    • Fix _contains of IntLogUniformDistribution (#3005)
    • Render importance scores next to bars in matplotlib.plot_param_importances (#3012, thanks @xadrianzetx!)
• Make default value of verbose_eval None for LightGBMTuner/LightGBMTunerCV to avoid conflict (#3014, thanks @chezou!)
    • Unify colormap of plot_contour (#3017)
    • Relax FixedTrial and FrozenTrial allowing not-contained parameters during suggest_* (#3018)
    • Raise errors if optuna ask CLI receives --sampler-kwargs without --sampler (#3029)
    • Remove _get_removed_version_from_deprecated_version function (#3065, thanks @nuka137!)
    • Reformat labels for small importance scores in plotly.plot_param_importances (#3073, thanks @xadrianzetx!)
    • Speed up Matplotlib backend plot_contour using SciPy's spsolve (#3092)
    • Remove updates in cached storage (#3120, thanks @shu65!)

    Bug Fixes

    • Add tests of sample_relative and fix type of return values of SkoptSampler and PyCmaSampler (#2897)
    • Fix GridSampler with RetryFailedTrialCallback or enqueue_trial (#2946)
    • Fix the type of trial.values in MLflow integration (#2991)
    • Fix to raise ValueError for invalid q in DiscreteUniformDistribution (#3001)
    • Do not call trial.report during sanity check (#3002)
    • Fix matplotlib.plot_contour bug (#3046, thanks @IEP!)
    • Handle single distributions in fANOVA evaluator (#3085, thanks @xadrianzetx!)

    Installation

    • Support scikit-learn v1.0.0 (#3003)
    • Pin tensorflow and tensorflow-estimator versions to <2.7.0 (#3059)
    • Add upper version constraint of PyTorchLightning (#3077)
    • Pin keras version to <2.7.0 (#3078)
    • Remove version constraints of tensorflow (#3084)

    Documentation

    • Add note of the behavior when calling multiple trial.report (#2980)
    • Add note for DDP training of pytorch-lightning (#2984)
    • Add note to OptunaSearchCV about direction (#3007)
    • Clarify n_trials in the docs (#3016, thanks @Rohan138!)
    • Add a note to use pickle with different optuna versions (#3034)
    • Unify the visualization docs (#3041, thanks @sidshrivastav!)
    • Fix a grammatical error in FAQ doc (#3051, thanks @belldandyxtq!)
    • Less ambiguous documentation for optuna tell (#3052)
    • Add example for logging.set_verbosity (#3061, thanks @drumehiron!)
    • Mention the tutorial of 002_configurations.py in the Trial API page (#3067, thanks @makkimaki!)
    • Mention the tutorial of 003_efficient_optimization_algorithms.py in the Trial API page (#3068, thanks @makkimaki!)
    • Add link from set_user_attrs in Study to the user_attrs entry in Tutorial (#3069, thanks @MasahitoKumada!)
    • Update description for missing samplers and pruners (#3087, thanks @masaaldosey!)
    • Simplify the unit testing explanation (#3089)
    • Fix range description in suggest_float docstring (#3091, thanks @xadrianzetx!)
    • Fix documentation for the package installation procedure on different OS (#3118, thanks @masap!)
• Add description of ValueError and TypeError to Raises section of Trial.report (#3124, thanks @MasahitoKumada!)

    Examples

    • Use RetryFailedTrialCallback in pytorch_checkpoint example (https://github.com/optuna/optuna-examples/pull/59, thanks @xadrianzetx!)
    • Add Python 3.9 to CI yaml files (https://github.com/optuna/optuna-examples/pull/61)
    • Replace suggest_uniform with suggest_float (https://github.com/optuna/optuna-examples/pull/63)
    • Remove deprecated warning message in lightgbm (https://github.com/optuna/optuna-examples/pull/64)
    • Pin tensorflow and tensorflow-estimator versions to <2.7.0 (https://github.com/optuna/optuna-examples/pull/66)
    • Restrict upper version of pytorch-lightning (https://github.com/optuna/optuna-examples/pull/67)
    • Add an external resource to README.md (https://github.com/optuna/optuna-examples/pull/68, thanks @solegalli!)

    Tests

    • Add test case of samplers for conditional objective function (#2904)
    • Test int distributions with default step (#2924)
    • Be aware of trial preparation when checking heartbeat interval (#2982)
    • Simplify the DDP model definition in the test of pytorch-lightning (#2983)
    • Wrap data with np.asarray in lightgbm test (#2997)
    • Patch calls to deprecated suggest APIs across codebase (#3027, thanks @xadrianzetx!)
    • Make return_cvbooster of LightGBMTuner consistent to the original value (#3070, thanks @abatomunkuev!)
    • Fix parametrize_sampler (#3080)
    • Fix verbosity for tests/integration_tests/lightgbm_tuner_tests/test_optimize.py (#3086, thanks @nyanhi!)
    • Generalize empty search space test case to all hyperparameter importance evaluators (#3096, thanks @xadrianzetx!)
    • Check if texts in legend by order agnostic way (#3103)
    • Add tests for axis scales to matplotlib.plot_slice (#3121)

    Code Fixes

    • Add test case of samplers for conditional objective function (#2904)
    • Fix #2949, remove BaseStudy (#2986, thanks @twsl!)
    • Use optuna.load_study in optuna ask CLI to omit direction/directions option (#2989)
    • Fix typo in Trial warning message (#3008, thanks @xadrianzetx!)
    • Replaces boston dataset with california housing dataset (#3011, thanks @avats-dev!)
    • Fix deprecation version of suggest APIs (#3054, thanks @xadrianzetx!)
    • Add remove_version to the missing @deprecated argument (#3064, thanks @nuka137!)
    • Add example of optuna.logging.get_verbosity (#3066, thanks @MasahitoKumada!)
    • Support {Float|Int}Distribution in NSGA-II crossover operators (#3139, thanks @xadrianzetx!)

    Continuous Integration

    • Install botorch to CI jobs on mac (#2988)
    • Use libomp 11.1.0 for Mac (#3024)
    • Run mac-tests CI at a scheduled time (#3028)
    • Set concurrency to github workflows (#3095)
    • Skip CLI tests when calculating the coverage (#3097)
    • Migrate mypy version to 0.910 (#3123)
    • Avoid installing the latest MLfow to prevent doctests from failing (#3135)

    Other

    • Bump up version to 2.11.0dev (#2976)
    • Add roadmap news to README.md (#2999)
    • Bump up version number to 3.0.0a1.dev (#3006)
    • Add Python 3.9 to tox.ini (#3025)
    • Fix version number to 3.0.0a0 (#3140)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @Crissman, @HideakiImamura, @IEP, @MasahitoKumada, @Rohan138, @TakuyaInoue-github, @abatomunkuev, @avats-dev, @belldandyxtq, @chezou, @drumehiron, @g-votte, @himkt, @hvy, @kasparthommen, @keisuke-umezawa, @makkimaki, @masaaldosey, @masap, @not522, @nuka137, @nyanhi, @nzw0301, @shu65, @sidshrivastav, @sile, @solegalli, @tohmae, @toshihikoyanase, @twsl, @xadrianzetx, @yoshinobc, @ytsmiling

• v2.10.0 (Oct 4, 2021)

    This is the release note of v2.10.0.

    Highlights

    New CLI Subcommand for Analyzing Studies

New subcommands optuna trials, optuna best-trial and optuna best-trials have been introduced to Optuna’s CLI for listing trials in studies with RDB storages. They allow direct interaction with trial data from the command line in various formats, including human-readable tables, JSON, and YAML. See the following examples:

    Show all trials in a study.

    $ optuna trials --storage sqlite:///example.db --study-name example
    +--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
    | number |               value | datetime_start      | datetime_complete   | duration       | params              | state    |
    +--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
    |      0 |  0.6098421143538713 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.026059 | {'x': 'A', 'y': 6}  | COMPLETE |
    |      1 |  0.6584108953598753 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.023447 | {'x': 'A', 'y': 10} | COMPLETE |
    |      2 |   0.612883262548314 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.021577 | {'x': 'C', 'y': 3}  | COMPLETE |
    |      3 | 0.09326753798819143 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.024183 | {'x': 'A', 'y': 0}  | COMPLETE |
    |      4 |  0.7316749689191168 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.021994 | {'x': 'C', 'y': 4}  | COMPLETE |
    +--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
    

    Show the best trial as YAML.

    $ optuna best-trial --storage sqlite:///example.db --study-name example --format yaml
    datetime_complete: '2021-10-01 14:36:46'
    datetime_start: '2021-10-01 14:36:46'
    duration: '0:00:00.024183'
    number: 3
    params:
      x: A
      y: 0
    state: COMPLETE
    value: 0.09326753798819143
    

    Show the best trials of multi-objective optimization and train a neural network with one of the best parameters.

    $ STORAGE=sqlite:///example.db
    $ STUDY_NAME=example-mo
    $ optuna best-trials --storage $STORAGE --study-name $STUDY_NAME
    +--------+-------------------------------------------+---------------------+---------------------+----------------+--------------------------------------------------+----------+
    | number | values                                    | datetime_start      | datetime_complete   | duration       | params                                           | state    |
    +--------+-------------------------------------------+---------------------+---------------------+----------------+--------------------------------------------------+----------+
    |      0 | [0.23884292794146034, 0.6905832476748404] | 2021-10-01 15:02:32 | 2021-10-01 15:02:32 | 0:00:00.035815 | {'lr': 0.05318673615579818, 'optimizer': 'adam'} | COMPLETE |
    |      2 | [0.3157886300888031, 0.05110976427394465] | 2021-10-01 15:02:32 | 2021-10-01 15:02:32 | 0:00:00.030019 | {'lr': 0.08044012012204389, 'optimizer': 'sgd'}  | COMPLETE |
    +--------+-------------------------------------------+---------------------+---------------------+----------------+--------------------------------------------------+----------+
    
    $ optuna best-trials --storage $STORAGE --study-name $STUDY_NAME --format json > result.json
    $ OPTIMIZER=`jq '.[0].params.optimizer' result.json`
    $ LR=`jq '.[0].params.lr' result.json`
    $ python train.py $OPTIMIZER $LR
    

    See #2847 for more details.

    Multi-objective Optimization Support of Weights & Biases and MLflow Integrations

    Weights & Biases and MLflow integration modules support tracking multi-objective optimization. Now, they accept arbitrary numbers of objective values with metric names.

    Weights & Biases

    from optuna.integration import WeightsAndBiasesCallback
    
    wandbc = WeightsAndBiasesCallback(metric_name=["mse", "mae"])
    
    ...
    
    study = optuna.create_study(directions=["minimize", "minimize"])
    study.optimize(objective, n_trials=100, callbacks=[wandbc])
    


    MLflow

    from optuna.integration import MLflowCallback
    
    mlflc = MLflowCallback(metric_name=["accuracy", "latency"])
    
    ...
    
    study = optuna.create_study(directions=["minimize", "minimize"])
    study.optimize(objective, n_trials=100, callbacks=[mlflc])
    


    See #2835 and #2863 for more details.

    Breaking Changes

    • Align CLI output format (#2882)
• In particular, the return format of optuna ask has been simplified. The first layer of nesting with the key "trial" is removed. Parsing can be simplified from jq '.trial.params' to jq '.params'.

    New Features

    • Support multi-objective optimization in WeightsAndBiasesCallback (#2835, thanks @xadrianzetx!)
    • Introduce trials CLI (#2847)
    • Support multi-objective optimization in MLflowCallback (#2863, thanks @xadrianzetx!)

    Enhancements

    • Add Plotly-like interpolation algorithm to optuna.visualization.matplotlib.plot_contour (#2810, thanks @xadrianzetx!)
    • Sort values when the categorical values is numerical in plot_parallel_coordinate (#2821, thanks @TakuyaInoue-github!)
    • Refactor MLflowCallback (#2855, thanks @xadrianzetx!)
    • Minor refactoring of plot_parallel_coordinate (#2856)
    • Update sklearn.py (#2966, thanks @Garve!)

    Bug Fixes

    • Fix datetime_complete in _CachedStorage (#2846)
    • Hyperband no longer assumes it is the only pruner (#2879, thanks @cowwoc!)
    • Fix method untransform of _SearchSpaceTransform with distribution.single() == True (#2947, thanks @yoshinobc!)

    Installation

    • Avoid keras 2.6.0 (#2851)
    • Drop tensorflow and keras version constraints (#2852)
    • Avoid latest allennlp==2.7.0 (#2894)
    • Introduce the version constraint of scikit-learn (#2953)

    Documentation

    • Fix bounds' shape in the document (#2830)
    • Simplify documentation of FrozenTrial (#2833)
    • Fix typo: replace CirclCI with CircleCI (#2840)
    • Added alternative callback function #2844 (#2845, thanks @DeviousLab!)
    • Update URL of cmaes repository (#2857)
    • Improve the docstring of MLflowCallback (#2883)
    • Fix create_trial document (#2888)
    • Fix an argument in docstring of _CachedStorage (#2917)
    • Use :obj: for True, False, and None instead of inline code (#2922)
    • Use inline code syntax for constraints_func (#2930)
    • Add link to Weights & Biases example (#2962, thanks @xadrianzetx!)

    Examples

    • Do not use latest keras==2.6.0 (https://github.com/optuna/optuna-examples/pull/44)
    • Fix typo in Dask-ML GitHub Action workflow (https://github.com/optuna/optuna-examples/pull/45, thanks @jrbourbeau!)
    • Support Python 3.9 for TensorFlow and MLFlow (https://github.com/optuna/optuna-examples/pull/47)
    • Replace deprecated argument lr with learning_rate in tf.keras (https://github.com/optuna/optuna-examples/pull/51)
    • Avoid latest allennlp==2.7.0 (https://github.com/optuna/optuna-examples/pull/52)
    • Save checkpoint to tmpfile and rename it (https://github.com/optuna/optuna-examples/pull/53)
    • PyTorch checkpoint cosmetics (https://github.com/optuna/optuna-examples/pull/54)
    • Add Weights & Biases example (https://github.com/optuna/optuna-examples/pull/55, thanks @xadrianzetx!)
    • Use MLflowCallback in MLflow example (https://github.com/optuna/optuna-examples/pull/58, thanks @xadrianzetx!)

    Tests

    • Fixed relational operator not including 1 (#2865, thanks @Yu212!)
    • Add scenario tests for samplers (#2869)
    • Add test cases for storage upgrade (#2890)
    • Add test cases for show_progress_bar of optimize (#2900, thanks @xadrianzetx!)
    • Speed-up sampler tests by using random sampling of skopt (#2910)
    • Fixes namedtuple type name (#2961, thanks @sobolevn!)

    Code Fixes

    • Changed y-axis and x-axis access according to matplotlib docs (#2834, thanks @01-vyom!)
    • Fix a BoTorch deprecation warning (#2861)
    • Relax metric name type hinting in WeightsAndBiasesCallback (#2884, thanks @xadrianzetx!)
    • Fix recent alembic 1.7.0 type hint error (#2887)
    • Remove old unused Trial._after_func method (#2899)
    • Fixes namedtuple type name (#2961, thanks @sobolevn!)

    Continuous Integration

    • Enable act to run for other workflows (#2656)
    • Drop tensorflow and keras version constraints (#2852)
    • Avoid segmentation fault of test_lightgbm.py on macOS (#2896)

    Other

    • Preinstall RDB binding Python libraries in Docker image (#2818)
    • Bump to v2.10.0.dev (#2829)
    • Bump to v2.10.0 (#2975)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @01-vyom, @Crissman, @DeviousLab, @Garve, @HideakiImamura, @TakuyaInoue-github, @Yu212, @c-bata, @cowwoc, @himkt, @hvy, @jrbourbeau, @keisuke-umezawa, @not522, @nzw0301, @sobolevn, @toshihikoyanase, @xadrianzetx, @yoshinobc

• v2.9.1 (Aug 3, 2021)

    This is the release note of v2.9.1.

    Highlights

    Ask-and-Tell CLI Fix

    The storage URI and the study name are no longer logged by optuna ask and optuna tell. The former could contain sensitive information.

    Enhancements

    • Remove storage URI from ask and tell CLI subcommands (#2838)

    Other

    • Bump to v2.9.1 (#2839)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @himkt, @hvy, @not522

• v2.9.0 (Aug 2, 2021)

    This is the release note of v2.9.0.

    Help us create the next version of Optuna! Please take a few minutes to fill in this survey, and let us know how you use Optuna now and what improvements you'd like. https://forms.gle/TtJuuaqFqtjmbCP67

    Highlights

    Ask-and-Tell CLI: Optuna from the Command Line

The built-in CLI, which you can already use to upgrade storages or check the installed version with optuna --version, now provides experimental subcommands for the ask-and-tell interface. It is now possible to run an entire optimization with Optuna from the CLI, without writing a single line of Python.

    Ask with optuna ask

    Ask for parameters using optuna ask, specifying the search space, storage, study name, sampler and optimization direction. The parameters and the associated trial number can be output as either JSON or YAML.

The following is an example that outputs the result as YAML and pipes it to a file.

    $ optuna ask --storage sqlite:///mystorage.db \
        --study-name mystudy \
        --sampler TPESampler \
        --sampler-kwargs '{"multivariate": true}' \
        --search-space '{"x": {"name": "UniformDistribution", "attributes": {"low": 0.0, "high": 1.0}}, "y": {"name": "CategoricalDistribution", "attributes": {"choices": ["foo", "bar"]}}}' \
        --direction minimize \
        --out yaml \
        > out.yaml
    [I 2021-07-30 15:56:50,774] A new study created in RDB with name: mystudy
    [I 2021-07-30 15:56:50,808] Asked trial 0 with parameters {'x': 0.21492964898919975, 'y': 'foo'} in study 'mystudy' and storage 'sqlite:///mystorage.db'.
    
    $ cat out.yaml
    trial:
      number: 0
      params:
        x: 0.21492964898919975
        y: foo
    

Specify multiple whitespace-separated directions for multi-objective optimization.

    Tell with optuna tell

    After computing the objective value based on the output of ask, you can report the result back using optuna tell and it will be stored in the study.

    $ optuna tell --storage sqlite:///mystorage.db \
        --study-name mystudy \
        --trial-number 0 \
        --values 1.0
    [I 2021-07-30 16:01:13,039] Told trial 0 with values [1.0] and state TrialState.COMPLETE in study 'mystudy' and storage 'sqlite:///mystorage.db'.
    

Specify multiple whitespace-separated values for multi-objective optimization.

    See https://github.com/optuna/optuna/pull/2817 for details.

    Weights & Biases Integration

WeightsAndBiasesCallback is a new study optimization callback that allows logging with Weights & Biases. This lets you use Weights & Biases’ rich visualization features to analyze studies, complementing Optuna’s own visualizations.

    import optuna
    from optuna.integration.wandb import WeightsAndBiasesCallback
    
    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
    
        return (x - 2) ** 2
    
    wandb_kwargs = {"project": "my-project"}
    wandbc = WeightsAndBiasesCallback(wandb_kwargs=wandb_kwargs)
    study = optuna.create_study(study_name="mystudy")
    study.optimize(objective, n_trials=10, callbacks=[wandbc])
    

    See https://github.com/optuna/optuna/pull/2781 for details.

    TPE Sampler Refactorings

The Tree-structured Parzen Estimator (TPE) sampler has always been the default sampler in Optuna. Both its API and internal code have grown over time to accommodate various needs, such as independent and joint parameter sampling (the multivariate parameter) and multi-objective optimization (the MOTPESampler sampler). In this release, the TPE sampler has been refactored and its code greatly reduced. The previously experimental multi-objective TPE sampler MOTPESampler has been deprecated, and its capabilities are now absorbed by the standard TPESampler.

This change may break code that depends on fixed seeds with this sampler. The optimization algorithms themselves are otherwise unchanged.

The following demonstrates how you can now use the TPESampler for multi-objective optimization.

    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
    
        return v0, v1
    
    sampler = optuna.samplers.TPESampler()  # `MOTPESampler` used to be required for multi-objective optimization.
    study = optuna.create_study(
        directions=["minimize", "minimize"],
        sampler=sampler,
    )
    study.optimize(objective, n_trials=100)
    

    Note that omitting the sampler argument or specifying None currently defaults to the NSGAIISampler for multi-objective studies instead of the TPESampler.

    See https://github.com/optuna/optuna/pull/2618 for details.

    Breaking Changes

    • Unify the univariate and multivariate TPE (#2618)

    New Features

    • MLFlow decorator for optimization function (#2670, thanks @lucafurrer!)
    • Redis Heartbeat (#2780, thanks @Turakar!)
    • Introduce Weights & Biases integration (#2781, thanks @xadrianzetx!)
    • Function for failing zombie trials and invoke their callbacks (#2811)
    • Optuna ask and tell CLI options (#2817)

    Enhancements

    • Unify MOTPESampler and TPESampler (#2688)
    • Changed interpolation type to make numeric range consistent with Plotly (#2712, thanks @01-vyom!)
    • Add the warning if an intermediate value is already reported at the same step (#2782, thanks @TakuyaInoue-github!)
    • Prioritize grids that are not yet running in GridSampler (#2783)
    • Fix warn_independent_sampling in TPESampler (#2786)
    • Avoid applying constraint_fn to non-COMPLETE trials in NSGAII-sampler (#2791)
    • Speed up TPESampler (#2816)
    • Enable CLI helps for subcommands (#2823)

    Bug Fixes

    • Fix AllenNLPExecutor reproducibility (#2717, thanks @MagiaSN!)
    • Use repr and eval to restore pruner parameters in AllenNLP integration (#2731)
    • Fix Nan cast bug in TPESampler (#2739)
    • Fix infer_relative_search_space of TPE with the single point distributions (#2749)

    Installation

    • Avoid latest numpy 1.21 (#2766)
    • Fix numpy 1.21 related mypy errors (#2767)

    Documentation

    • Add how to suggest proportion to FAQ (#2718)
    • Explain how to add a user's own logging callback function (#2730)
    • Add copy_study to the docs (#2737)
    • Fix link to kurobako benchmark page (#2748)
    • Improve docs of constant liar (#2785)
    • Fix the document of RetryFailedTrialCallback.retried_trial_number (#2789)
    • Match the case of ID (#2798, thanks @belldandyxtq!)
    • Rephrase RDBStorage RuntimeError description (#2802, thanks @belldandyxtq!)

    Examples

    • Add remaining examples to CI tests (https://github.com/optuna/optuna-examples/pull/26)
    • Use hydra 1.1.0 syntax (https://github.com/optuna/optuna-examples/pull/28)
    • Replace monitor value with accuracy (https://github.com/optuna/optuna-examples/pull/32)

    Tests

    • Count the number of calls of the wrapped method in the test of MOTPEMultiObjectiveSampler (#2666)
    • Add specific test cases for visualization.matplotlib.plot_intermediate_values (#2754, thanks @asquare100!)
    • Added unit tests for optimization history of matplotlib tests (#2761, thanks @01-vyom!)
    • Changed unit tests for pareto front of matplotlib tests (#2763, thanks @01-vyom!)
    • Added unit tests for slice of matplotlib tests (#2764, thanks @01-vyom!)
    • Added unit tests for param importances of matplotlib tests (#2774, thanks @01-vyom!)
    • Changed unit tests for parallel coordinate of matplotlib tests (#2778, thanks @01-vyom!)
    • Use more specific assert in tests/visualization_tests/matplotlib/test_intermediate_plot.py (#2803)
    • Added unit tests for contour of matplotlib tests (#2806, thanks @01-vyom!)

    Code Fixes

    • Create study directory (#2721)
    • Dissect allennlp integration in submodules based on roles (#2745)
    • Fix deprecated version of MOTPESampler (#2770)

    Continuous Integration

    • Daily CI of Checks (#2760)
    • Use default resolver in CI's pip installation (#2779)

    Other

    • Bump up version to v2.9.0dev (#2723)
    • Add an optional section to ask reproducible codes (#2799)
    • Add survey news to README.md (#2801)
    • Add python code to issue templates for making reporting runtime information easy (#2805)
    • Bump to v2.9.0 (#2828)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

@01-vyom, @Crissman, @HideakiImamura, @MagiaSN, @TakuyaInoue-github, @Turakar, @asquare100, @belldandyxtq, @c-bata, @harupy, @himkt, @hvy, @keisuke-umezawa, @lucafurrer, @not522, @nzw0301, @sile, @toshihikoyanase, @vanpelt, @xadrianzetx, @ytsmiling

• v2.8.0 (Jun 7, 2021)

    This is the release note of v2.8.0.

    New Examples Repository

The number of Optuna examples has grown as the number of integrations has increased, and we’ve moved them to their own repository: optuna/optuna-examples.

    Highlights

    TPE Sampler Improvements

    Constant Liar for Distributed Optimization

In distributed environments, the TPE sampler may sample many points in a small neighborhood, because it does not know that other trials running in parallel are sampling nearby. To avoid this issue, we’ve implemented the Constant Liar (CL) heuristic, which returns a poor value for trials that have started but are not yet complete, reducing duplicated search effort.

    study = optuna.create_study(sampler=optuna.samplers.TPESampler(constant_liar=True))
    

The following history plots demonstrate how optimization can be improved using this feature. Ten parallel workers simultaneously try to optimize the same function, which takes about one second to compute. The first plot uses constant_liar=False and the second uses constant_liar=True, i.e., the Constant Liar feature. We can see that with Constant Liar, the sampler does a better job of assigning different parameter configurations to different trials and converges faster.

(Figures: optimization histories without and with Constant Liar)

    See #2664 for details.

    Tree-structured Search Space Support

    The TPE sampler with multivariate=True now supports tree-structured search spaces. Previously, if the user split the search space with an if-else statement, as shown below, the TPE sampler with multivariate=True would fall back to random sampling. Now, if you set multivariate=True and group=True, the TPE sampler algorithm will be applied to each partitioned search space to perform efficient sampling.

    See #2526 for more details.

    def objective(trial):
        classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    
        if classifier_name == "SVC":
            # If `multivariate=True` and `group=True`, the following 2 parameters are sampled jointly by TPE.
            svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
            svc_kernel = trial.suggest_categorical("kernel", ["linear", "rbf", "sigmoid"])
    
            classifier_obj = sklearn.svm.SVC(C=svc_c, kernel=svc_kernel)
        else:
            # If `multivariate=True` and `group=True`, the following 3 parameters are sampled jointly by TPE.
            rf_n_estimators = trial.suggest_int("rf_n_estimators", 1, 20)
            rf_criterion = trial.suggest_categorical("rf_criterion", ["gini", "entropy"])
            rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
    
            classifier_obj = sklearn.ensemble.RandomForestClassifier(n_estimators=rf_n_estimators, criterion=rf_criterion, max_depth=rf_max_depth)
    
        ... 
    
    sampler = optuna.samplers.TPESampler(multivariate=True, group=True)
    

    Copying Studies

    Studies can now be copied across storages. The trial history as well as Study.user_attrs and Study.system_attrs are preserved.

    For instance, this allows dumping a study stored in a MySQL RDBStorage into an SQLite file. A study serialized this way can be shared with other users who are unable to access the original storage.

    study = optuna.create_study(
        study_name="my-study", storage="mysql+pymysql://[email protected]/optuna"
    )
    study.optimize(..., n_trials=100)
    
    # Copy the study "my-study" from the MySQL `RDBStorage` to a local SQLite file named `optuna.db`.
    optuna.copy_study(
        from_study_name="my-study",
        from_storage="mysql+pymysql://[email protected]/optuna",
        to_storage="sqlite:///optuna.db",
    )
    
    study = optuna.load_study(study_name="my-study", storage="sqlite:///optuna.db")
    assert len(study.trials) >= 100
    

    See #2607 for details.

    Callbacks

    optuna.storages.RetryFailedTrialCallback Added

    Used as a callback in RDBStorage, this allows trials that were previously pre-empted or otherwise aborted, as detected by a failed heartbeat, to be re-run.

    storage = optuna.storages.RDBStorage(
        url="sqlite:///:memory:",
        heartbeat_interval=60,
        grace_period=120,
        failed_trial_callback=optuna.storages.RetryFailedTrialCallback(max_retry=3),
    )
    study = optuna.create_study(storage=storage)
    

    See #2694 for details.

    optuna.study.MaxTrialsCallback Added

    Used as a callback in study.optimize, this allows setting a maximum number of trials of a particular state, such as a maximum number of completed or failed trials, before stopping the optimization.

    study.optimize(
        objective,
        callbacks=[optuna.study.MaxTrialsCallback(10, states=(optuna.trial.TrialState.COMPLETE,))],
    )
    

    See #2612 for details.

    Breaking Changes

    • Allow None as study_name when there is only a single study in load_study (#2608)
    • Relax GridSampler allowing not-contained parameters during suggest_* (#2663)

    New Features

    • Make LightGBMTuner and LightGBMTunerCV reproducible (#2431, thanks @tetsuoh0103!)
    • Add visualization.matplotlib.plot_pareto_front (#2450, thanks @tohmae!)
    • Support a group decomposed search space and apply it to TPE (#2526)
    • Add __str__ for samplers (#2539)
    • Add n_min_trials argument for PercentilePruner and MedianPruner (#2556)
    • Copy study (#2607)
    • Allow None as study_name when there is only a single study in load_study (#2608)
    • Add MaxTrialsCallback class to enable stopping after fixed number of trials (#2612)
    • Implement PatientPruner (#2636)
    • Support multi-objective optimization in CLI (optuna create-study) (#2640)
    • Constant liar for TPESampler (#2664)
    • Add automatic retry callback (#2694)
    • Sorts categorical values on axis that contains only numerical values in visualization.matplotlib.plot_slice (#2709, thanks @Muktan!)

    Enhancements

    • PyTorchLightningPruningCallback to warn when an evaluation metric does not exist (#2157, thanks @bigbird555!)
    • Pareto front visualization to visualize study progress with color scales (#2563)
    • Sort categorical values on axis that contains only numerical values in visualization.plot_contour (#2569)
    • Improve param_importances (#2576)
    • Sort categorical values on axis that contains only numerical values in visualization.matplotlib.plot_contour (#2593)
    • Show legend of optuna.visualization.matplotlib.plot_edf (#2603)
    • Show legend of optuna.visualization.matplotlib.plot_intermediate_values (#2606)
    • Make MOTPEMultiObjectiveSampler a thin wrapper for MOTPESampler (#2615)
    • Do not wait for next heartbeat on study completion (#2686, thanks @Turakar!)
    • Change colour scale of contour plot by matplotlib for consistency with plotly results (#2711, thanks @01-vyom!)

    Bug Fixes

    • Add type conversion for reference point and solution set (#2584)
    • Fix contour plot with multi-objective study and target being specified (#2589)
    • Fix distribution's _contains (#2652)
    • Read environment variables in dump_best_config (#2681)
    • Update version info entry on RDB storage upgrade (#2687)
    • Fix results not reproducible when running AllenNLPExecutor multiple times (Backport of #2717) (#2728)

    Installation

    • Replace sklearn constraint (#2634)
    • Add constraint of Sphinx version (#2657)
    • Add click==7.1.2 to GitHub workflows to solve AllenNLP import error (#2665)
    • Avoid tensorflow 2.5.0 (#2674)
    • Remove example from setup.py (#2676)

    Documentation

    • Add example to optuna.logging.disable_propagation (#2477, thanks @jeromepatel!)
    • Add documentation for hyperparameter importance target parameter (#2551)
    • Remove the news section in README.md (#2586)
    • Documentation updates to CmaEsSampler (#2591, thanks @turian!)
    • Rename ray-joblib.py to snakecase with underscores (#2594)
    • Replace If with if in a sentence (#2602)
    • Use CmaEsSampler instead of TPESampler in the batch optimization example (#2610)
    • README fixes (#2617, thanks @Scitator!)
    • Remove wrong returns description in docstring (#2619)
    • Improve document on BoTorchSampler page (#2631)
    • Add the missing colon (#2661)
    • Add missing parameter WAITING details in docstring (#2683, thanks @jeromepatel!)
    • Update URLs to optuna-examples (#2684)
    • Fix indents in the ask-and-tell tutorial (#2690)
    • Join sampler examples in README.md (#2692)
    • Fix typo in the tutorial (#2704)
    • Update command for installing auto-formatters (#2710, thanks @01-vyom!)
    • Some edits for CONTRIBUTING.md (#2719)

    Examples

    • Split GitHub Actions workflows (https://github.com/optuna/optuna-examples/pull/1)
    • Cherry pick #2611 of optuna/optuna (https://github.com/optuna/optuna-examples/pull/2)
    • Add checks workflow (https://github.com/optuna/optuna-examples/pull/5)
    • Add MaxTrialsCallback class to enable stopping after fixed number of trials (https://github.com/optuna/optuna-examples/pull/9)
    • Update README.md (https://github.com/optuna/optuna-examples/pull/10)
    • Add an example of warm starting CMA-ES (https://github.com/optuna/optuna-examples/pull/11, thanks @nmasahiro!)
    • Replace old links to example files (https://github.com/optuna/optuna-examples/pull/12)
    • Avoid tensorflow 2.5.0 (https://github.com/optuna/optuna-examples/pull/13)
    • Avoid tensorflow 2.5 (https://github.com/optuna/optuna-examples/pull/15)
    • Test multi_objective in CI (https://github.com/optuna/optuna-examples/pull/16)
    • Use only one GPU for PyTorch Lightning example by default (https://github.com/optuna/optuna-examples/pull/17)
    • Remove example of CatBoost in pruning section (https://github.com/optuna/optuna-examples/pull/18, #2702)
    • Add issues and pull request templates (https://github.com/optuna/optuna-examples/pull/20)
    • Add CONTRIBUTING.md file (https://github.com/optuna/optuna-examples/pull/21)
    • Change PR approvers from two to one (https://github.com/optuna/optuna-examples/pull/22)
    • Improve search space for XGBoost (#2346, thanks @jeromepatel!)
    • Remove n_jobs for study.optimize in examples/ (#2588, thanks @jeromepatel!)
    • Using the "log" key is deprecated in pytorch_lightning (#2611, thanks @sushi30!)
    • Move examples to a new repository (#2655)
    • Remove remaining examples (#2675)
    • Follow-up of optuna-examples PR 11 (https://github.com/optuna/optuna-examples/pull/11) (#2689)

    Tests

    • Remove assertions for supported dimensions from test_plot_pareto_front_unsupported_dimensions (#2578)
    • Update a test function of matplotlib.plot_pareto_front for consistency (#2583)
    • Add deterministic parameter to make LightGBM training reproducible (#2623)
    • Add force_col_wise parameter of LightGBM in test cases of LightGBMTuner and LightGBMTunerCV (#2630, thanks @tetsuoh0103!)
    • Remove CudaCallback from the fastai test (#2641)
    • Add test cases in optuna/visualization/matplotlib/edf.py (#2642)
    • Refactor a unittest in test_median.py (#2644)
    • Refactor pruners_test (#2691, thanks @tsumli!)

    Code Fixes

    • Remove redundant lines in CI settings of examples (#2554)
    • Remove the unused argument of functions in matplotlib.contour (#2571)
    • Fix axis labels of optuna.visualization.matplotlib.plot_pareto_front when axis_order is specified (#2577)
    • Remove list casts (#2601)
    • Remove _get_distribution from visualization/matplotlib/_param_importances.py (#2604)
    • Fix grammatical error in failure message (#2609, thanks @agarwalrounak!)
    • Separate MOTPESampler from TPESampler (#2616)
    • Simplify add_distributions in _SearchSpaceGroup (#2651)
    • Replace old example URLs in optuna.integrations (#2700)

    Continuous Integration

    • Supporting Python 3.9 with integration modules and optional dependencies (#2530, thanks @0x41head!)
    • Upgrade pip in PyPI and Test PyPI workflow (#2598)
    • Fix PyPI publish workflow (#2624)
    • Introduce speed benchmarks using asv (#2673)

    Other

    • Bump master version to 2.8.0dev (#2562)
    • Upload to TestPyPI at the time of release as well (#2573)
    • Install blackdoc in formats.sh (#2637)
    • Use command to check the existence of the libraries to avoid partially matching (#2653)
    • Add an example section to the README (#2667)
    • Fix formatting in contribution guidelines (#2668)
    • Update CONTRIBUTING.md with optuna-examples (#2669)

    Thanks to All the Contributors!

    This release was made possible by the authors and the people who participated in the reviews and discussions.

    @toshihikoyanase, @himkt, @Scitator, @tohmae, @crcrpar, @c-bata, @01-vyom, @sushi30, @tsumli, @not522, @tetsuoh0103, @jeromepatel, @bigbird555, @hvy, @g-votte, @nzw0301, @turian, @nmasahiro, @Crissman, @sile, @agarwalrounak, @Muktan, @Turakar, @HideakiImamura, @keisuke-umezawa, @0x41head

    Source code(tar.gz)
    Source code(zip)
  • v2.7.0(Apr 5, 2021)

    This is the release note of v2.7.0.

    Highlights

    New optuna-dashboard Repository

    A new dashboard optuna-dashboard is being developed in a separate repository under the Optuna organization. Install it with pip install optuna-dashboard and run it with optuna-dashboard $STORAGE_URL. The previous optuna dashboard command is now deprecated.

    Deprecate n_jobs Argument of Study.optimize

    The GIL has been an issue when using the n_jobs argument for multi-threaded optimization. We have decided to deprecate this option in favor of the more stable process-level parallelization. Users who have been parallelizing at the thread level with n_jobs are encouraged to refer to the tutorial for process-level parallelization.
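
    As a minimal sketch of process-level parallelization (assuming a shared storage; the study name and URL are placeholders), run the same script once per worker process:

    # worker.py -- launch this script in multiple processes.
    import optuna

    # The study is assumed to have been created beforehand, e.g. with
    # `optuna create-study --study-name "my-study" --storage "sqlite:///example.db"`.
    study = optuna.load_study(study_name="my-study", storage="sqlite:///example.db")
    study.optimize(objective, n_trials=20)  # `objective` is defined as usual.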

    If the objective function is not affected by the GIL, thread-level parallelism may be useful. You can achieve thread-level parallelization in the following way.

    from concurrent.futures import ThreadPoolExecutor

    # Each of the five threads runs 100 trials against the shared `study` object.
    with ThreadPoolExecutor(max_workers=5) as executor:
        for _ in range(5):
            executor.submit(study.optimize, objective, 100)
    

    New Tutorial and Examples

    Tutorial pages about the usage of the ask-and-tell interface (#2422) and best_trial (#2427) have been added, as well as an example that demonstrates parallel optimization using Ray (#2298) and an example to explain how to stop the optimization based on the number of completed trials instead of the total number of trials (#2449).

    Improved Code Quality

    The code quality was improved in terms of bug fixes, third party library support, and platform support.

    For instance, the bugs on warm starting CMA-ES and visualization.matplotlib.plot_optimization_history were resolved by #2501 and #2532, respectively.

    Third party libraries such as PyTorch, fastai, and AllenNLP were updated. We have updated the corresponding integration modules and examples for the new versions. See #2442, #2550 and #2528 for details.

    From this version, we are expanding platform support. Previously, changes were tested only in Linux containers; now, changes merged into the master branch are also tested in macOS containers (#2461).

    Breaking Changes

    • Deprecate dashboard (#2559)
    • Deprecate n_jobs in Study.optimize (#2560)

    New Features

    • Support object representation of StudyDirection for create_study arguments (#2516)

    Enhancements

    • Change caching implementation of MOTPE (#2406, thanks @y0z!)
    • Fix to replace numpy.append (#2419, thanks @nyanhi!)
    • Modify after_trial for NSGAIISampler (#2436, thanks @jeromepatel!)
    • Print a URL of a related release note in the warning message (#2496)
    • Add log-linear algorithm for 2d Pareto front (#2503, thanks @parsiad!)
    • Concatenate the argument text after the deprecation warning (#2558)

    Bug Fixes

    • Use 2.0 style delete API of SQLAlchemy (#2487)
    • Fix Warm Starting CMA-ES with a maximize direction (#2501)
    • Fix visualization.matplotlib.plot_optimization_history for multi-objective (#2532)

    Installation

    • Bump torch to 1.8.0 (#2442)
    • Remove Cython from install_requires (#2466)
    • Fix Cython installation for Python 3.9 (#2474)
    • Avoid catalyst 21.3 (#2480, thanks @crcrpar!)

    Documentation

    • Add ask and tell interface tutorial (#2422)
    • Add tutorial for re-use of the best_trial (#2427)
    • Add explanation for get_storage in the API reference (#2430)
    • Follow-up of the user-defined pruner tutorial (#2446)
    • Add a new example max_trial_callback to optuna/examples (#2449, thanks @jeromepatel!)
    • Standardize on 'hyperparameter' usage (#2460)
    • Replace MNIST with Fashion MNIST in multi-objective optimization tutorial (#2468)
    • Fix links on SuccessiveHalvingPruner page (#2489)
    • Swap the order of load_if_exists and directions for consistency (#2491)
    • Clarify n_jobs for OptunaSearchCV (#2545)
    • Mention the paper is in Japanese (#2547, thanks @crcrpar!)
    • Fix typo of the paper's author name (#2552)

    Examples

    • Add an example of Ray with joblib backend (#2298)
    • Added RL and Multi-Objective examples to examples/README.md (#2432, thanks @jeromepatel!)
    • Replace sh with bash in README of kubernetes examples (#2440)
    • Apply #2438 to pytorch examples (#2453, thanks @crcrpar!)
    • More Examples Folders after #2302 (#2458, thanks @crcrpar!)
    • Apply urllib patch for MNIST download (#2459, thanks @crcrpar!)
    • Update Dockerfile of MLflow Kubernetes examples (#2472, thanks @0x41head!)
    • Replace Optuna's Catalyst pruning callback with Catalyst's Optuna pruning callback (#2485, thanks @crcrpar!)
    • Use whitespace tokenizer instead of spacy tokenizer (#2494)
    • Use Fashion MNIST in example (#2505, thanks @crcrpar!)
    • Update pytorch_lightning_distributed.py to remove MNIST and PyTorch Lightning errors (#2514, thanks @0x41head!)
    • Use OptunaPruningCallback in catalyst_simple.py (#2546, thanks @crcrpar!)
    • Support fastai 2.3.0 (#2550)

    Tests

    • Add MOTPESampler in parametrize_multi_objective_sampler (#2448)
    • Extract test cases regarding Pareto front to test_multi_objective.py (#2525)

    Code Fixes

    • Fix mypy errors produced by numpy==1.20.0 (#2300, thanks @0x41head!)
    • Simplify the code to find best values (#2394)
    • Use _SearchSpaceTransform in RandomSampler (#2410, thanks @sfujiwara!)
    • Set the default value of state of create_trial as COMPLETE (#2429)

    Continuous Integration

    • Run TensorFlow related examples on Python3.8 (#2368, thanks @crcrpar!)
    • Use legacy resolver in CI's pip installation (#2434, thanks @crcrpar!)
    • Run tests and integration tests on Mac & Python3.7 (#2461, thanks @crcrpar!)
    • Run Dask ML example on Python3.8 (#2499, thanks @crcrpar!)
    • Install OpenBLAS for mxnet1.8.0 (#2508, thanks @crcrpar!)
    • Add ray to requirements (#2519, thanks @crcrpar!)
    • Upgrade AllenNLP to v2.2.0 (#2528)
    • Add Coverage for ChainerMN in codecov (#2535, thanks @jeromepatel!)
    • Skip fastai2.3 tentatively (#2548, thanks @crcrpar!)

    Other

    • Add -f option to make clean command idempotent (#2439)
    • Bump master version to 2.7.0dev (#2444)
    • Document how to write a new tutorial in CONTRIBUTING.md (#2463, thanks @crcrpar!)
    • Bump up version number to 2.7.0 (#2561)

    Thanks to All the Contributors!

    This release was made possible by authors, and everyone who participated in reviews and discussions.

    @0x41head, @AmeerHajAli, @Crissman, @HideakiImamura, @c-bata, @crcrpar, @g-votte, @himkt, @hvy, @jeromepatel, @keisuke-umezawa, @not522, @nyanhi, @nzw0301, @parsiad, @sfujiwara, @sile, @toshihikoyanase, @y0z

    Source code(tar.gz)
    Source code(zip)
  • v2.6.0(Mar 8, 2021)

    This is the release note of v2.6.0.

    Highlights

    Warm Starting CMA-ES and sep-CMA-ES Support

    Two new CMA-ES variants are available. Warm starting CMA-ES enables transferring prior knowledge on similar tasks. More specifically, CMA-ES can be initialized based on existing results of similar tasks. sep-CMA-ES is an algorithm which constrains the covariance matrix to be diagonal and is suitable for separable objective functions. See #2307 and #1951 for more details.

    Example of Warm starting CMA-ES:

    study = optuna.load_study(storage="...", study_name="existing-study")
    
    study.sampler = optuna.samplers.CmaEsSampler(source_trials=study.trials)
    study.optimize(objective, n_trials=100)
    


    Example of sep-CMA-ES:

    study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler(use_separable_cma=True))
    study.optimize(objective, n_trials=100)
    

    [Figure: sep-CMA-ES benchmark result on the six-hump camel function]

    PyTorch Distributed Data Parallel

    Hyperparameter optimization for distributed neural-network training using PyTorch Distributed Data Parallel is supported. A new integration module, TorchDistributedTrial, synchronizes the hyperparameters among all nodes. See #2303 for further details.

    Example:

    def objective(trial):
        distributed_trial = optuna.integration.TorchDistributedTrial(trial)
        lr = distributed_trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        …
    

    RDBStorage Improvements

    The RDBStorage now allows longer user and system attributes, as well as choices for categorical distributions (e.g. choices spanning thousands of bytes/characters), to be persisted. Corresponding column data types of the underlying SQL tables have been changed from VARCHAR to TEXT. If you want to upgrade from an older version of Optuna and keep using the same storage, please migrate your tables as follows. Make sure to create a backup before the migration, and note that databases that don’t support TEXT will not work with this release.

    # Alter table columns from `VARCHAR` to `TEXT` to allow storing larger data.
    optuna storage upgrade --storage <storage URL>
    

    For more details, see #2395.

    Heartbeat Improvements

    The heartbeat feature was introduced in v2.5.0 to automatically mark stale trials as failed. It is now possible to not only fail the trials but also execute user-specified callback functions to process the failed trials. See #2347 for more details.

    Example:

    def objective(trial):
        …  # Very time-consuming computation.
    
    # Adding a failed trial to the trial queue.
    def failed_trial_callback(study, trial):
        study.add_trial(
            optuna.create_trial(
                state=optuna.trial.TrialState.WAITING,
                params=trial.params,
                distributions=trial.distributions,
                user_attrs=trial.user_attrs,
                system_attrs=trial.system_attrs,
            )
        )
    
    storage = optuna.storages.RDBStorage(
        url=..., 
        heartbeat_interval=60, 
        grace_period=120, 
        failed_trial_callback=failed_trial_callback,
    )
    study = optuna.create_study(storage=storage)
    study.optimize(objective, n_trials=100)
    
    

    Pre-defined Search Space with Ask-and-tell Interface

    The ask-and-tell interface allows specifying pre-defined search spaces through the new fixed_distributions argument. This option will keep the code short when the search space is known beforehand. It replaces calls to Trial.suggest_…. See #2271 for more details.

    study = optuna.create_study()
    
    # For example, the distributions are previously defined when using `create_trial`.
    distributions = {
        "optimizer": optuna.distributions.CategoricalDistribution(["adam", "sgd"]),
        "lr": optuna.distributions.LogUniformDistribution(0.0001, 0.1),
    }
    trial = optuna.trial.create_trial(
        params={"optimizer": "adam", "lr": 0.0001},
        distributions=distributions,
        value=0.5,
    )
    study.add_trial(trial)
    
    # You can pass the distributions previously defined.
    trial = study.ask(fixed_distributions=distributions)
    
    # `optimizer` and `lr` are already suggested and accessible with `trial.params`.
    print(trial.params)
    

    Breaking Changes

    RDBStorage data type updates

    Databases must be migrated for storages that were created with earlier versions of Optuna. Please refer to the highlights above.

    For more details, see #2395.

    datetime_start of enqueued trials

    The datetime_start property of Trial, FrozenTrial, and FixedTrial shows when a trial was started. This property may now be None. For trials enqueued with Study.enqueue_trial, the timestamp used to be set at the time of enqueueing. Now, the timestamp is first set to None when the trial is enqueued, and later updated to the time it is popped from the queue to run. This also affects StudySummary.datetime_start, which may be None when trials have been enqueued but not yet popped.
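
    A minimal sketch of the new behavior (assuming an in-memory study and a hypothetical parameter "x"):

    import optuna

    study = optuna.create_study()
    study.enqueue_trial({"x": 0.5})

    # The enqueued trial is still waiting, so it has no start time yet.
    assert study.get_trials(deepcopy=False)[-1].datetime_start is None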

    For more details, see #2236.

    joblib internals removed

    joblib was partially supported as a backend for parallel optimization via the n_jobs parameter to Study.optimize. This support has now been removed and internals have been replaced with concurrent.futures.

    For more details, see #2269.

    AllenNLP v2 support

    Optuna now officially supports AllenNLP v2. We have also dropped support for AllenNLP v0 and pruning support for AllenNLP v1. If you want to use AllenNLP v0 or v1 with Optuna, please install Optuna v2.5.0.

    For more details, see #2412.

    New Features

    • Support sep-CMA-ES algorithm (#1951)
    • Add an option to the Study.ask method that allows define-and-run parameter suggestion (#2271)
    • Add integration module for PyTorch Distributed Data Parallel (#2303)
    • Support Warm Starting CMA-ES (#2307)
    • Add callback argument for heartbeat functionality (#2347)
    • Support IntLogUniformDistribution for TensorBoard (#2362, thanks @nzw0301!)

    Enhancements

    • Fix the wrong way to set datetime_start (clean) (#2236, thanks @chenghuzi!)
    • Multi-objective error messages from Study to suggest solutions (#2251)
    • Adds missing LightGBMTuner metrics for the case of higher is better (#2267, thanks @mavillan!)
    • Color Inversion to make contour plots more visually intuitive (#2291, thanks @0x41head!)
    • Close sessions at the end of with-clause in Storage (#2345)
    • Improve "plot_pareto_front" (#2355, thanks @0x41head!)
    • Implement after_trial method in CmaEsSampler (#2359, thanks @jeromepatel!)
    • Convert low and high to float explicitly in distributions (#2360)
    • Add after_trial for PyCmaSampler (#2365, thanks @jeromepatel!)
    • Implement after_trial for BoTorchSampler and SkoptSampler (#2372, thanks @jeromepatel!)
    • Implement after_trial for TPESampler (#2376, thanks @jeromepatel!)
    • Support BoTorch >= 0.4.0 (#2386, thanks @nzw0301!)
    • Mitigate string-length limitation of RDBStorage (#2395)
    • Support AllenNLP v2 (#2412)
    • Implement after_trial for MOTPESampler (#2425, thanks @jeromepatel!)

    Bug Fixes

    • Add test and fix for relative sampling failure in multivariate TPE (#2055, thanks @alexrobomind!)
    • Fix optuna.visualization.plot_contour of subplot case with categorical axes (#2297, thanks @nzw0301!)
    • Only fail trials associated with the current study (#2330)
    • Fix TensorBoard integration for suggest_float (#2335, thanks @nzw0301!)
    • Add type conversions for upper/lower whose values are integers (#2343)
    • Fix improper stopping with the combination of GridSampler and HyperbandPruner (#2353)
    • Fix matplotlib.plot_parallel_coordinate with only one suggested parameter (#2354, thanks @nzw0301!)
    • Create model_dir by _LightGBMBaseTuner (#2366, thanks @nyanhi!)
    • Fix assertion in cached storage for state update (#2370)
    • Use low in _transform_from_uniform for TPE sampler (#2392, thanks @nzw0301!)
    • Remove indices from optuna.visualization.plot_parallel_coordinate with categorical values (#2401, thanks @nzw0301!)

    Installation

    • mypy hotfix voiding latest NumPy 1.20.0 (#2292)
    • Remove jax from setup.py (#2308, thanks @nzw0301!)
    • Install torch from PyPI for ReadTheDocs (#2361)
    • Pin botorch version (#2379)

    Documentation

    • Fix broken links in README.md (#2268)
    • Provide docs/source/tutorial for faster local documentation build (#2277)
    • Remove specification of n_trials from example of GridSampler (#2280)
    • Fix typos and errors in document (#2281, thanks @belldandyxtq!)
    • Add tutorial of multi-objective optimization of neural network with PyTorch (#2305)
    • Add explanation for local verification (#2309)
    • Add sphinx.ext.imgconverter extension (#2323, thanks @KoyamaSohei!)
    • Include high in the documentation of UniformDistribution and LogUniformDistribution (#2348)
    • Fix typo; Replace dimentional with dimensional (#2390, thanks @nzw0301!)
    • Fix outdated docstring of TFKerasPruningCallback (#2399, thanks @sfujiwara!)
    • Call fig.show() in visualization code examples (#2403, thanks @harupy!)
    • Explain the backend of parallelisation (#2428, thanks @nzw0301!)
    • Navigate with left/right arrow keys in the document (#2433, thanks @ydcjeff!)
    • Hotfix for MNIST download in tutorial (#2438)

    Examples

    • Provide a user-defined pruner example (#2140, thanks @tktran!)
    • Add Hydra example (#2290, thanks @nzw0301!)
    • Use trainer.callback_metrics in the Pytorch Lightning example (#2294, thanks @TezRomacH!)
    • Example folders (#2302)
    • Update PL example with typing and DataModule (#2332, thanks @TezRomacH!)
    • Remove unsupported argument from PyTorch Lightning example (#2357)
    • Update examples/kubernetes/mlflow/check_study.sh to match whole words (#2363, thanks @twolffpiggott!)
    • Add PyTorch checkpoint example using failed_trial_callback (#2373)
    • Update Dockerfile of Kubernetes simple example (#2375, thanks @0x41head!)

    Tests

    • Refactor test of GridSampler (#2285)
    • Replace parametrize_storage with StorageSupplier (#2404, thanks @nzw0301!)

    Code Fixes

    • Replace joblib with concurrent.futures for parallel optimization (#2269)
    • Make trials stale only when succeeded to fail (#2284)
    • Apply code-fix to LightGBMTuner (Follow-up #2267) (#2299)
    • Inherit PyTorchLightningPruningCallback from Callback (#2326, thanks @TezRomacH!)
    • Consistently use suggest_float (#2344)
    • Fix typo (#2352, thanks @nzw0301!)
    • Increase API request limit for stale bot (#2369)
    • Fix typo; replace contraints with constraints (#2378, thanks @nzw0301!)
    • Fix typo (#2383, thanks @nzw0301!)
    • Update examples for study.get_trials for states filtering (#2393, thanks @jeromepatel!)
    • Fix - remove arguments of python2 super().__init__ (#2402, thanks @nyanhi!)

    Continuous Integration

    • Turn off RDB tests on circleci (#2255)
    • Allow allennlp in py3.8 integration tests (#2367)
    • Color pytest logs (#2400, thanks @harupy!)
    • Remove -f option from doctest pip installation (#2418)

    Other

    • Bump up version number to v2.6.0.dev (#2283)
    • Enable automatic closing of stale issues and pull requests by github actions (#2287)
    • Add setup section to CONTRIBUTING.md (#2342)
    • Fix the local mypy error on Pytorch Lightning integration (#2349)
    • Update the link to the botorch example (#2377, thanks @nzw0301!)
    • Remove -f option from documentation installation (#2407)

    Thanks to All the Contributors!

    This release was made possible by authors, and everyone who participated in reviews and discussions.

    @0x41head, @Crissman, @HideakiImamura, @KoyamaSohei, @TezRomacH, @alexrobomind, @belldandyxtq, @c-bata, @chenghuzi, @crcrpar, @g-votte, @harupy, @himkt, @hvy, @jeromepatel, @keisuke-umezawa, @mavillan, @not522, @nyanhi, @nzw0301, @sfujiwara, @sile, @tktran, @toshihikoyanase, @twolffpiggott, @ydcjeff, @ytsmiling

    Source code(tar.gz)
    Source code(zip)
  • v2.5.0(Feb 1, 2021)

    This is the release note of v2.5.0.

    Highlights

    Ask-and-Tell

    The ask-and-tell interface is a new complement to Study.optimize. It allows users to construct Trial instances without an objective function callback, giving more flexibility in how to define search spaces, ask for suggested hyperparameters, and evaluate objective functions. The interface is made up of two methods, Study.ask and Study.tell.

    • Study.ask returns a new Trial object.
    • Study.tell takes either a Trial object or a trial number along with the result of that trial, i.e. a value and/or the state, and saves it. Since Study.tell accepts a trial number, the trial object can be disposed of after parameters have been suggested. This allows objective function evaluation on a different thread or process.
    import optuna
    from optuna.trial import TrialState
    
    study = optuna.create_study()
    
    # Use a Python for-loop to iteratively optimize the study.
    for _ in range(100):  
        trial = study.ask()  # `trial` is a `Trial` and not a `FrozenTrial`. 
    
        # Objective function, in this case not as a function but at global scope.
        x = trial.suggest_float("x", -1, 1)
        y = x ** 2
    
        study.tell(trial, y)
    
        # Or, tell by trial number. This is equivalent to `study.tell(trial, y)`.
        # study.tell(trial.number, y)
    
        # Or, prune if the trial seems unpromising. 
        # study.tell(trial, state=TrialState.PRUNED)
    
    assert len(study.trials) == 100
    

    Heartbeat

    Now, Optuna supports monitoring trial heartbeats with RDB storages. For example, if a process running a trial is killed by a scheduler in a cluster environment, Optuna will automatically change the state of the trial that was running on that process from TrialState.RUNNING to TrialState.FAIL.

    # Consider running this script on several processes. 
    import optuna
    
    def objective(trial):
        ...  # Very time-consuming computation.
    
    # Recording heartbeats every 60 seconds.
    # Other processes' trials where more than 120 seconds have passed 
    # since the last heartbeat was recorded will be automatically failed.
    storage = optuna.storages.RDBStorage(url=..., heartbeat_interval=60, grace_period=120)
    study = optuna.create_study(storage=storage)
    study.optimize(objective, n_trials=100)
    

    Constrained NSGA-II

    NSGA-II experimentally supports constrained optimization. Users can introduce constraints with the new constraints_func argument of NSGAIISampler.__init__.

    The following is an example using this argument, a bi-objective version of the knapsack problem. We have 100 items and two knapsacks, and would like to maximize the profits of the selected items within the weight limitations.

    import numpy as np
    import optuna
    
    # Define bi-objective knapsack problem.
    n_items = 100
    n_knapsacks = 2
    feasible_rate = 0.5
    seed = 1
    
    rng = np.random.RandomState(seed=seed)
    weights = rng.randint(10, 101, size=(n_knapsacks, n_items))
    profits = rng.randint(10, 101, size=(n_knapsacks, n_items))
    constraints = (np.sum(weights, axis=1) * feasible_rate).astype(int)
    
    def objective(trial):
        xs = np.array([trial.suggest_categorical(f"x_{i}", (0, 1)) for i in range(weights.shape[1])])
        total_weights = np.sum(weights * xs, axis=1)
        total_profits = np.sum(profits * xs, axis=1)
    
        # Constraints which are considered feasible if less than or equal to zero.
        constraints_violation = total_weights - constraints
        trial.set_user_attr("constraint", constraints_violation.tolist())
    
        return total_profits.tolist()
    
    def constraints_func(trial):
        return trial.user_attrs["constraint"]
    
    sampler = optuna.samplers.NSGAIISampler(population_size=10, constraints_func=constraints_func)
    
    study = optuna.create_study(directions=["maximize"] * n_knapsacks, sampler=sampler)
    study.optimize(objective, n_trials=200)
    

    [Animation: constrained NSGA-II solving the bi-objective knapsack problem]

    New Features

    • Ask-and-Tell API (Study.ask, Study.tell) (#2158)
    • Add constraints_func argument to NSGA-II (#2175)
    • Add heartbeat functionality using threading (#2190)
    • Add Study.add_trials to simplify creating customized study (#2261)

    Enhancements

    • Support log scale in parallel coordinate (#2164, thanks @tohmae!)
    • Warn if constraints are missing in constrained NSGA-II (#2205)
    • Immediately persist suggested parameters with _CachedStorage (#2214)
    • Include the upper bound of uniform/loguniform distributions (#2223)

    Bug Fixes

    • Call base sampler's after_trial in PartialFixedSampler (#2209)
    • Fix trials_dataframe for multi-objective optimization with fail or pruned trials (#2265)
    • Fix calculate_weights_below method of MOTPESampler (#2274, thanks @y0z!)

    Installation

    • Remove version constraint for AllenNLP and run allennlp_*.py on GitHub Actions (#2226)
    • Pin mypy==0.790 (#2259)
    • Temporarily avoid AllenNLP v2 (#2276)

    Documentation

    • Add callback & (add|enqueue)_trial recipe (#2125)
    • Make create_trial's documentation and tests richer (#2126)
    • Move import lines of the callback recipe (A follow-up of #2125) (#2221)
    • Fix optuna/samplers/_base.py typo (#2239)
    • Introduce optuna-dashboard on README (#2224)

    Examples

    • Refactor examples/multi_objective/pytorch_simple.py (#2230)
    • Move BoTorch example to examples/multi_objective directory (#2244)
    • Refactor examples/multi_objective/botorch_simple.py (#2245, thanks @nzw0301!)
    • Fix typo in examples/mlflow (#2258, thanks @nzw0301!)

    Tests

    • Make create_trial's documentation and tests richer (#2126)
    • Fix unit test of median pruner (#2171)
    • SkoptSampler acquisition function in test to more likely converge (#2194)
    • Diet tests/test_study.py (#2218)
    • Diet tests/test_trial.py (#2219)
    • Shorten the names of trial tests (#2228)
    • Move STORAGE_MODES to testing/storage.py (#2231)
    • Remove duplicate test on study_tests/test_optimize.py (#2232)
    • Add init files in test directories (#2257)

    Code Fixes

    • Code quality improvements (#2009, thanks @srijan-deepsource!)
    • Refactor CMA-ES sampler with search space transform (#2193)
    • BoTorchSampler minor code fix reducing dictionary lookup and clearer type behavior (#2195)
    • Fix bad warning message in BoTorchSampler (#2197)
    • Use study.get_trials instead of study._storage.get_all_trials (#2208)
    • Ensure uniform and loguniform distributions less than high boundary (#2243)

    Continuous Integration

    • Add RDBStorage tests in github actions (#2200)
    • Publish distributions to TestPyPI on each day (#2220)
    • Rename GitHub Actions jobs for (Test)PyPI uploads (#2254)

    Other

    • Fix the syntax of pypi-publish.yml (#2187)
    • Fix mypy local fails in tests/test_deprecated.py and tests/test_experimental.py (#2191)
    • Add an explanation of "no period in the PR title" to CONTRIBUTING.md (#2192)
    • Bump up version number to 2.5.0.dev (#2238)
    • Fix mypy version to 0.790 (Follow-up of 2259) (#2260)
    • Bump up version number to v2.5.0 (#2282)

    Thanks to All the Contributors!

    This release was made possible by authors, and everyone who participated in reviews and discussions.

    @Crissman, @HideakiImamura, @c-bata, @crcrpar, @g-votte, @himkt, @hvy, @keisuke-umezawa, @not522, @nzw0301, @sile, @srijan-deepsource, @tohmae, @toshihikoyanase, @y0z, @ytsmiling

    Source code(tar.gz)
    Source code(zip)
  • v2.4.0(Jan 12, 2021)

    This is the release note of v2.4.0.

    Highlights

    Python 3.9 Support

    This is the first version to officially support Python 3.9. Everything is tested with the exception of certain integration modules under optuna.integration. We will continue to extend the support in the coming releases.

    Multi-objective Optimization

    Multi-objective optimization in Optuna is now a stable first-class citizen. It allows optimizing multiple objectives at the same time, such as maximizing model accuracy while minimizing model inference time.

    Single-objective optimization can be extended to multi-objective optimization by

    1. specifying a sequence (e.g. a tuple) of directions instead of a single direction in optuna.create_study. Both the direction and directions arguments are supported for backwards compatibility
    2. (optionally) specifying a sampler that supports multi-objective optimization in optuna.create_study. If omitted, the NSGAIISampler is used by default
    3. returning a sequence of values instead of a single value from the objective function

    Multi-objective Sampler

    Samplers that support multi-objective optimization are currently the NSGAIISampler, the MOTPESampler, the BoTorchSampler and the RandomSampler.

    Example

    import optuna
    
    def objective(trial):
        # The Binh and Korn function. It has two objectives to minimize.
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        return v0, v1
    
    sampler = optuna.samplers.NSGAIISampler()
    study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
    study.optimize(objective, n_trials=100)
    
    # Get a list of the best trials.
    best_trials = study.best_trials  
    
    # Visualize the best trials (i.e. Pareto front) in blue.
    fig = optuna.visualization.plot_pareto_front(study, target_names=["v0", "v1"])
    fig.show()
    


    Migrating from the Experimental optuna.multi_objective

    optuna.multi_objective used to be an experimental submodule for multi-objective optimization. This submodule is now deprecated. The changes required to migrate to the new interfaces are subtle, as described by the steps in the previous section.

    Database Storage Schema Upgrade

    With the introduction of multi-objective optimization, the database storage schema for the RDBStorage has been changed. To continue to use databases from v2.3, run the following command to upgrade your tables. Please create a backup of the database beforehand.

    optuna storage upgrade --storage <URL to the storage, e.g. sqlite:///example.db>
    

    BoTorch Sampler

    BoTorchSampler is an experimental sampler based on BoTorch. BoTorch is a library for Bayesian optimization using PyTorch. See the example for usage.

    Constrained Optimization

    For the first time in Optuna, BoTorchSampler allows constrained optimization. Users can impose constraints on hyperparameters or objective function values as follows.

    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", -15, 30)
        y = trial.suggest_float("y", -15, 30)
    
        # Constraints which are considered feasible if less than or equal to zero.
        # The feasible region is basically the intersection of a circle centered at (x=5, y=0)
        # and the complement to a circle centered at (x=8, y=-3).
        c0 = (x - 5) ** 2 + y ** 2 - 25
        c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7
    
        # Store the constraints as user attributes so that they can be restored after optimization.
        trial.set_user_attr("constraint", (c0, c1))
    
        return x ** 2 + y ** 2
    
    def constraints(trial):
        return trial.user_attrs["constraint"]
    
    # Specify the constraint function when instantiating the `BoTorchSampler`.
    sampler = optuna.integration.BoTorchSampler(constraints_func=constraints)
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=32)
    

    Multi-objective Optimization

    BoTorchSampler supports both single- and multi-objective optimization. By default, the sampler selects the appropriate sampling algorithm with respect to the number of objectives.

    Customizability

    BoTorchSampler is customizable via the candidates_func callback parameter. Users familiar with BoTorch can change the surrogate model, acquisition function, and its optimizer in this callback to utilize any of the algorithms provided by BoTorch.
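
    For illustration, a minimal single-objective sketch of a custom candidates_func (this is not the sampler's built-in default): it assumes BoTorch's SingleTaskGP surrogate and analytic expected improvement, and relies on train_obj already following the sampler's maximization convention.

    from botorch.acquisition import ExpectedImprovement
    from botorch.fit import fit_gpytorch_model
    from botorch.models import SingleTaskGP
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def candidates_func(train_x, train_obj, train_con, bounds):
        # Fit a GP surrogate to the observed values.
        model = SingleTaskGP(train_x, train_obj)
        fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

        # Propose the next point by maximizing analytic expected improvement.
        acqf = ExpectedImprovement(model, best_f=train_obj.max())
        candidates, _ = optimize_acqf(
            acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=512
        )
        return candidates

    sampler = optuna.integration.BoTorchSampler(candidates_func=candidates_func)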

    Visualization with Callback Specified Target Values

    Visualization functions can now plot values other than objective values, such as inference time or evaluation by other metrics. Users can specify the values to be plotted through the target argument. This also makes the visualization functions available in multi-objective optimization by pointing target at a single objective.
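
    For example, a small sketch (target receives a FrozenTrial; duration and values are standard fields):

    # Plot trial durations (in seconds) instead of objective values.
    optuna.visualization.plot_optimization_history(
        study, target=lambda t: t.duration.total_seconds(), target_name="Duration (s)"
    )

    # In a multi-objective study, plot along the first objective.
    optuna.visualization.plot_optimization_history(
        study, target=lambda t: t.values[0], target_name="v0"
    )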

    New Tutorials

    The tutorial has been improved and new content for each of Optuna’s key features has been added. More content will be added in the future. Please look forward to it!

    Breaking Changes

    • Allow filtering trials from Study and BaseStorage based on TrialState (#1943)
    • Stop storing error stack traces in fail_reason in trial system_attr (#1964)
    • Importance with target values other than objective value (#2109)

    New Features

    • Implement plot_contour and _get_contour_plot with Matplotlib backend (#1782, thanks @ytknzw!)
    • Implement plot_param_importances and _get_param_importance_plot with Matplotlib backend (#1787, thanks @ytknzw!)
    • Implement plot_slice and _get_slice_plot with Matplotlib backend (#1823, thanks @ytknzw!)
    • Add PartialFixedSampler (#1892, thanks @norihitoishida!)
    • Allow filtering trials from Study and BaseStorage based on TrialState (#1943)
    • Add rung promotion limitation in ASHA/Hyperband to enable arbitrary unknown length runs (#1945, thanks @alexrobomind!)
    • Add Fastai V2 pruner callback (#1954, thanks @hal-314!)
    • Support options available on AllenNLP except to node_rank and dry_run (#1959)
    • Universal data transformer (#1987)
    • Introduce BoTorchSampler (#1989)
    • Add axis order for plot_pareto_front (#2000, thanks @okdshin!)
    • plot_optimization_history with target values other than objective value (#2064)
    • plot_contour with target values other than objective value (#2075)
    • plot_parallel_coordinate with target values other than objective value (#2089)
    • plot_slice with target values other than objective value (#2093)
    • plot_edf with target values other than objective value (#2103)
    • Importance with target values other than objective value (#2109)
    • Migrate optuna.multi_objective.visualization.plot_pareto_front (#2110)
    • Raise ValueError if target is None and study is for multi-objective optimization for plot_contour (#2112)
    • Raise ValueError if target is None and study is for multi-objective optimization for plot_edf (#2117)
    • Raise ValueError if target is None and study is for multi-objective optimization for plot_optimization_history (#2118)
    • plot_param_importances with target values other than objective value (#2119)
    • Raise ValueError if target is None and study is for multi-objective optimization for plot_parallel_coordinate (#2120)
    • Raise ValueError if target is None and study is for multi-objective optimization for plot_slice (#2121)
    • Trial post processing (#2134)
    • Raise NotImplementedError for trial.report and trial.should_prune during multi-objective optimization (#2135)
    • Raise ValueError in TPE and CMA-ES if study is being used for multi-objective optimization (#2136)
    • Raise ValueError if target is None and study is for multi-objective optimization for get_param_importances, BaseImportanceEvaluator.evaluate, and plot_param_importances (#2137)
    • Raise ValueError in integration samplers if study is being used for multi-objective optimization (#2145)
    • Migrate NSGA2 sampler (#2150)
    • Migrate MOTPE sampler (#2167)
    • Storages to query trial IDs from numbers (#2168)

    Enhancements

    • Use context manager to treat session correctly (#1628)
    • Integrate multi-objective optimization module for the storages, study, and frozen trial (#1994)
    • Pass include_package to AllenNLP for distributed setting (#2018)
    • Change the RDB schema for multi-objective integration (#2030)
    • Update pruning callback for xgboost 1.3 (#2078, thanks @trivialfis!)
    • Fix log format for single objective optimization to include best trial (#2128)
    • Implement Study._is_multi_objective() to check whether study has multiple objectives (#2142, thanks @nyanhi!)
    • TFKerasPruningCallback to warn when an evaluation metric does not exist (#2156, thanks @bigbird555!)
    • Warn default target name when target is specified (#2170)
    • Study.trials_dataframe for multi-objective optimization (#2181)

    Bug Fixes

    • Make always compute weights_below in MOTPEMultiObjectiveSampler (#1979)
    • Fix the range of categorical values (#1983)
    • Remove circular reference of study (#2079)
    • Fix flipped colormap in matplotlib backend plot_parallel_coordinate (#2090)
    • Replace builtin isnumerical to capture float values in plot_contour (#2096, thanks @nzw0301!)
    • Drop unnecessary constraint from upgraded trial_values table (#2180)

    Installation

    • Ignore tests directory on install (#2015, thanks @130ndim!)
    • Clean up setup.py requirements (#2051)
    • Pin xgboost<1.3 (#2084)
    • Bump up PyTorch version (#2094)

    Documentation

    • Update tutorial (#1722)
    • Introduce plotly directive (#1944, thanks @harupy!)
    • Check everything by blackdoc (#1982)
    • Remove codecov from CONTRIBUTING.md (#2005)
    • Make the visualization examples deterministic (#2022, thanks @harupy!)
    • Use plotly directive in plot_pareto_front (#2025)
    • Remove plotly scripts and unused generated files (#2026)
    • Add mandarin link to ReadTheDocs layout (#2028)
    • Document about possible duplicate parameter configurations in GridSampler (#2040)
    • Fix MOTPEMultiObjectiveSampler's example (#2045, thanks @norihitoishida!)
    • Fix Read the Docs build failure caused by pip install --find-links (#2065)
    • Fix lt symbol (#2068, thanks @KoyamaSohei!)
    • Fix parameter section of RandomSampler in docs (#2071, thanks @akihironitta!)
    • Add note on the behavior of suggest_float with step argument (#2087)
    • Tune build time of #2076 (#2088)
    • Add matplotlib.plot_parallel_coordinate example (#2097, thanks @nzw0301!)
    • Add matplotlib.plot_param_importances example (#2098, thanks @nzw0301!)
    • Add matplotlib.plot_slice example (#2099, thanks @nzw0301!)
    • Add matplotlib.plot_contour example (#2100, thanks @nzw0301!)
    • Bump Sphinx up to 3.4.0 (#2127)
    • Additional docs about optuna.multi_objective deprecation (#2132)
    • Move type hints to description from signature (#2147)
    • Add copy button to all the code examples (#2148)
    • Fix wrong wording in distributed execution tutorial (#2152)

    Examples

    • Add MXNet Gluon example (#1985)
    • Update logging in PyTorch Lightning example (#2037, thanks @pbmstrk!)
    • Change return type of training_step of PyTorch Lightning example (#2043)
    • Fix dead links in examples/README.md (#2056, thanks @nai62!)
    • Add enqueue_trial example (#2059)
    • Skip FastAI v2 example in examples job (#2108)
    • Move examples/multi_objective/plot_pareto_front.py to examples/visualization/plot_pareto_front.py (#2122)
    • Use latest multi-objective functionality in multi-objective example (#2123)
    • Add haiku and jax simple example (#2155, thanks @nzw0301!)

    Tests

    • Update parametrize_sampler of test_samplers.py (#2020, thanks @norihitoishida!)
    • Change trail_id + 123 -> trial_id (#2052)
    • Fix scipy==1.6.0 test failure with LogisticRegression (#2166)

    Code Fixes

    • Introduce plotly directive (#1944, thanks @harupy!)
    • Stop storing error stack traces in fail_reason in trial system_attr (#1964)
    • Check everything by blackdoc (#1982)
    • HPI with _SearchSpaceTransform (#1988)
    • Fix TODO comment about orders of dicts (#2007)
    • Add __all__ to reexport modules explicitly (#2013)
    • Update CmaEsSampler's warning message (#2019, thanks @norihitoishida!)
    • Put up an alias for structs.StudySummary against study.StudySummary (#2029)
    • Deprecate optuna.type_checking module (#2032)
    • Remove py35 from black config in pyproject.toml (#2035)
    • Use model methods instead of session.query() (#2060)
    • Use find_or_raise_by_id instead of find_by_id to raise if a study does not exist (#2061)
    • Organize and remove unused model methods (#2062)
    • Leave a comment about RTD compromise (#2066)
    • Fix ideographic space (#2067, thanks @KoyamaSohei!)
    • Make new visualization parameters keyword only (#2082)
    • Use latest APIs in LightGBMTuner (#2083)
    • Add matplotlib.plot_slice example (#2099, thanks @nzw0301!)
    • Deprecate previous multi-objective module (#2124)
    • _run_trial refactoring (#2133)
    • Cosmetic fix of xgboost integration (#2143)

    Continuous Integration

    • Partial support of python 3.9 (#1908)
    • Check everything by blackdoc (#1982)
    • Avoid set-env in GitHub Actions (#1992)
    • PyTorch and AllenNLP (#1998)
    • Remove checks from circleci (#2004)
    • Migrate tests and coverage to GitHub Actions (#2027)
    • Enable blackdoc --diff option (#2031)
    • Unpin mypy version (#2069)
    • Skip FastAI v2 example in examples job (#2108)
    • Fix CI examples for Py3.6 (#2129)

    Other

    • Add tox.ini (#2024)
    • Allow passing additional arguments when running tox (#2054, thanks @harupy!)
    • Add Python 3.9 to README badge (#2063)
    • Clarify that generally pull requests need two or more approvals (#2104)
    • Release wheel package via PyPI (#2105)
    • Adds news entry about the Python 3.9 support (#2114)
    • Add description for tox to CONTRIBUTING.md (#2159)
    • Bump up version number to 2.4.0 (#2183)
    • [Backport] Fix the syntax of pypi-publish.yml (#2188)

    Thanks to All the Contributors!

    This release was made possible by authors, and everyone who participated in reviews and discussions.

    @130ndim, @Crissman, @HideakiImamura, @KoyamaSohei, @akihironitta, @alexrobomind, @bigbird555, @c-bata, @crcrpar, @eytan, @g-votte, @hal-314, @harupy, @himkt, @hvy, @keisuke-umezawa, @nai62, @norihitoishida, @not522, @nyanhi, @nzw0301, @okdshin, @pbmstrk, @sdaulton, @sile, @toshihikoyanase, @trivialfis, @ytknzw, @ytsmiling

    Source code(tar.gz)
    Source code(zip)
  • v2.3.0(Nov 4, 2020)

    This is the release note of v2.3.0.

    Highlights

    Multi-objective TPE sampler

    TPE sampler now supports multi-objective optimization. This new algorithm is implemented in optuna.multi_objective and used via optuna.multi_objective.samplers.MOTPEMultiObjectiveSampler. See #1530 for the details.

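    A minimal usage sketch (the two-objective objective function is assumed to be defined elsewhere and to return a tuple of values):

    import optuna

    sampler = optuna.multi_objective.samplers.MOTPEMultiObjectiveSampler()
    study = optuna.multi_objective.create_study(
        directions=["minimize", "minimize"], sampler=sampler
    )
    study.optimize(objective, n_trials=100)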

    LightGBMTunerCV returns the best booster

    The best booster of LightGBMTunerCV can now be obtained in the same way as with LightGBMTuner, as sketched below. See #1609 and #1702 for details.
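
    A minimal sketch (assuming LightGBM 3.0+, which provides return_cvbooster; params and dtrain are the usual LightGBM training arguments):

    import optuna.integration.lightgbm as lgb

    tuner = lgb.LightGBMTunerCV(params, dtrain, return_cvbooster=True)
    tuner.run()
    cvbooster = tuner.get_best_booster()  # A `lightgbm.CVBooster`.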

    PyTorch Lightning v1.0 support

    The integration with PyTorch Lightning v1.0 is available. The pruning feature of Optuna can be used with the new version of PyTorch Lightning using optuna.integration.PyTorchLightningPruningCallback. See #597 and #1926 for details.
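
    For reference, a minimal sketch of wiring the callback into a trainer (the monitored metric name "val_acc" is a placeholder, and `trial` and `model` come from the objective function):

    import pytorch_lightning as pl
    from optuna.integration import PyTorchLightningPruningCallback

    trainer = pl.Trainer(
        max_epochs=10,
        callbacks=[PyTorchLightningPruningCallback(trial, monitor="val_acc")],
    )
    trainer.fit(model)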

    RAPIDS + Optuna example

    An example to illustrate how to use RAPIDS with Optuna is available. You can use this example to harness the computational power of the GPU along with Optuna.

    New Features

    • Introduce Multi-objective TPE to optuna.multi_objective.samplers (#1530, thanks @y0z!)
    • Return LGBMTunerCV booster (#1702, thanks @nyanhi!)
    • Implement plot_intermediate_values and _get_intermediate_plot with Matplotlib backend (#1762, thanks @ytknzw!)
    • Implement plot_optimization_history and _get_optimization_history_plot with Matplotlib backend (#1763, thanks @ytknzw!)
    • Implement plot_parallel_coordinate and _get_parallel_coordinate_plot with Matplotlib backend (#1764, thanks @ytknzw!)
    • Improve MLflow callback functionality: allow nesting, and attached study attrs (#1918, thanks @drobison00!)

    Enhancements

    • Copy datasets before objective evaluation (#1805)
    • Fix 'Mean of empty slice' warning (#1927, thanks @carefree0910!)
    • Add reseed_rng to NSGAIIMultiObjectiveSampler (#1938)
    • Add RDB support to MoTPEMultiObjectiveSampler (#1978)

    Bug Fixes

    • Add some jitters in _MultivariateParzenEstimators (#1923, thanks @kstoneriv3!)
    • Fix plot_contour (#1929, thanks @carefree0910!)
    • Fix return type of the multivariate TPE samplers (#1955, thanks @nzw0301!)
    • Fix StudyDirection of mape in LightGBMTuner (#1966)

    Documentation

    • Add explanation for most module-level reference pages (#1850, thanks @tktran!)
    • Revert module directives (#1873)
    • Remove with_trace method from docs (#1882, thanks @i-am-jeetu!)
    • Add CuPy to projects using Optuna (#1889)
    • Add more sphinx doc comments (#1894, thanks @yuk1ty!)
    • Fix a broken link in matplotlib.plot_edf (#1899)
    • Fix broken links in README.md (#1901)
    • Show module paths in optuna.visualization and optuna.multi_objective.visualization (#1902)
    • Add a short description to the example in FAQ (#1903)
    • Embed plot_edf figure in documentation by using matplotlib plot directive (#1905, thanks @harupy!)
    • Fix plotly figure iframe paths (#1906, thanks @harupy!)
    • Update docstring of CmaEsSampler (#1909)
    • Add matplotlib.plot_intermediate_values figure to doc (#1933, thanks @harupy!)
    • Add matplotlib.plot_optimization_history figure to doc (#1934, thanks @harupy!)
    • Make code example of MOTPEMultiObjectiveSampler executable (#1953)
    • Add Raises comments to samplers (#1965, thanks @yuk1ty!)

    Examples

    • Make src comments more descriptive in examples/pytorch_lightning_simple.py (#1878, thanks @iamshnoo!)
    • Add an external project in Optuna examples (#1888, thanks @resnant!)
    • Add RAPIDS + Optuna simple example (#1924, thanks @Nanthini10!)
    • Apply follow-up of #1924 (#1960)

    Tests

    • Fix RDB test to avoid deadlock when creating study (#1919)
    • Add a test to verify nest_trials for MLflowCallback works properly (#1932, thanks @harupy!)
    • Add a test to verify tag_study_user_attrs for MLflowCallback works properly (#1935, thanks @harupy!)

    Code Fixes

    • Fix typo (#1900)
    • Refactor Study.optimize (#1904)
    • Refactor Study.trials_dataframe (#1907)
    • Add variable annotation to optuna/logging.py (#1920, thanks @akihironitta!)
    • Fix duplicate stack traces (#1921, thanks @akihironitta!)
    • Remove _log_normal_cdf (#1922, thanks @kstoneriv3!)
    • Convert comment style type hints (#1950, thanks @akihironitta!)
    • Align the usage of type hints and instantiation of dictionaries (#1956, thanks @akihironitta!)

    Continuous Integration

    • Run documentation build and doctest in GitHub Actions (#1891)
    • Resolve conflict of job-id of GitHub Actions workflows (#1898)
    • Pin mypy==0.782 (#1913)
    • Run allennlp_jsonnet.py on GitHub Actions (#1915)
    • Fix for PyTorch Lightning 1.0 (#1926)
    • Check blackdoc in CI (#1958)
    • Fix path for store_artifacts step in document CircleCI job (#1962, thanks @harupy!)

    Other

    • Fix how to check the format, coding style, and type hints (#1755)
    • Fix typo (#1968, thanks @nzw0301!)

    Thanks to All the Contributors!

    This release was made possible by the authors and everyone who participated in reviews and discussions.

    @Crissman, @HideakiImamura, @Nanthini10, @akihironitta, @c-bata, @carefree0910, @crcrpar, @drobison00, @harupy, @himkt, @hvy, @i-am-jeetu, @iamshnoo, @keisuke-umezawa, @kstoneriv3, @nyanhi, @nzw0301, @resnant, @sile, @smly, @tktran, @toshihikoyanase, @y0z, @ytknzw, @yuk1ty

  • v2.2.0 (Oct 5, 2020)

    This is the release note of v2.2.0.

    In this release, we drop support for Python 3.5. If you are using Python 3.5, please consider upgrading your Python environment to Python 3.6 or newer, or installing an older version of Optuna.

    Highlights

    Multivariate TPE sampler

    TPESampler is updated with an experimental option to enable multivariate sampling. This algorithm captures dependencies among hyperparameters better than the previous algorithm. See #1767 for more details.
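
    Enabling the experimental option is a one-line change:

    import optuna

    # multivariate=True turns on the experimental joint sampling of parameters.
    sampler = optuna.samplers.TPESampler(multivariate=True)
    study = optuna.create_study(sampler=sampler)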

    Improved AllenNLP support

    AllenNLPExecutor supports pruning. It is introduced in the official hyperparameter search guide by AllenNLP. Both AllenNLPExecutor and the guide were written by @himkt. See #1772.


    New Features

    • Create optuna.visualization.matplotlib (#1756, thanks @ytknzw!)
    • Add multivariate TPE sampler (#1767, thanks @kstoneriv3!)
    • Support AllenNLPPruningCallback for AllenNLPExecutor (#1772)

    Enhancements

    • KerasPruningCallback to warn when an evaluation metric does not exist (#1759, thanks @bigbird555!)
    • Implement plot_edf and _get_edf_plot with Matplotlib backend (#1760, thanks @ytknzw!)
    • Fix exception chaining all over the codebase (#1781, thanks @akihironitta!)
    • Add metric alias of rmse for LightGBMTuner (#1807, thanks @upura!)
    • Update PyTorch-Lighting minor version (#1813, thanks @nzw0301!)
    • Improve TensorBoardCallback (#1814, thanks @sfujiwara!)
    • Add metric alias for LightGBMTuner (#1822, thanks @nyanhi!)
    • Introduce a new argument to plot all evaluation points by optuna.multi_objective.visualization.plot_pareto_front (#1824, thanks @nzw0301!)
    • Add reseed_rng to RandomMultiobjectiveSampler (#1831, thanks @y0z!)

    Bug Fixes

    • Fix fANOVA for IntLogUniformDistribution (#1788)
    • Fix mypy in an environment where some dependencies are installed (#1804)
    • Fix WFG._compute() (#1812, thanks @y0z!)
    • Fix contour plot error for categorical distributions (#1819, thanks @zchenry!)
    • Store CMAES optimizer after splitting into substrings (#1833)
    • Add maximize support on CmaEsSampler (#1849)
    • Add matplotlib directory to optuna.visualization.__init__.py (#1867)

    Installation

    • Update setup.py to drop Python 3.5 support (#1818, thanks @harupy!)
    • Add Matplotlib to setup.py (#1829, thanks @ytknzw!)

    Documentation

    • Fix plot_pareto_front preview path (#1808)
    • Fix indents of the example of multi_objective.visualization.plot_pareto_front (#1815, thanks @nzw0301!)
    • Hide __init__ from docs (#1820, thanks @upura!)
    • Explicitly omit Python 3.5 from README.md (#1825)
    • Follow-up #1832: alphabetical naming and fixes (#1841)
    • Mention isort in the contribution guidelines (#1842)
    • Add news sections about introduction of isort (#1843)
    • Add visualization.matplotlib to docs (#1847)
    • Add sphinx doc comments regarding exceptions in the optimize method (#1857, thanks @yuk1ty!)
    • Avoid global study in Study.stop testcode (#1861)
    • Fix documents of visualization.is_available (#1869)
    • Improve ThresholdPruner example (#1876, thanks @fsmosca!)
    • Add logging levels to optuna.logging.set_verbosity (#1884, thanks @nzw0301!)

    Examples

    • Add XGBoost cross-validation example (#1836, thanks @sskarkhanis!)
    • Minor code fix of XGBoost examples (#1844)

    Code Fixes

    • Add default implementation of get_n_trials (#1568)
    • Introduce isort to automatically sort import statements (#1695, thanks @harupy!)
    • Avoid using experimental decorator on CmaEsSampler (#1777)
    • Remove logger member attributes from PyCmaSampler and CmaEsSampler (#1784)
    • Apply blackdoc (#1817)
    • Remove TODO (#1821, thanks @sfujiwara!)
    • Fix Redis example code (#1826)
    • Apply isort to visualization/matplotlib/ and multi_objective/visualization (#1830)
    • Move away from .scoring imports (#1864, thanks @norihitoishida!)
    • Add experimental decorator to matplotlib.* (#1868)

    Continuous Integration

    • Disable --cache-from if trigger of docker image build is release (#1791)
    • Remove Python 3.5 from CI checks (#1810, thanks @harupy!)
    • Update python version in docs (#1816, thanks @harupy!)
    • Migrate checks to GitHub Actions (#1838)
    • Add option --diff to black (#1840)

    Thanks to All the Contributors!

    This release was made possible by the authors and everyone who participated in reviews and discussions.

    @HideakiImamura, @akihironitta, @bigbird555, @c-bata, @crcrpar, @fsmosca, @g-votte, @harupy, @himkt, @hvy, @keisuke-umezawa, @kstoneriv3, @norihitoishida, @nyanhi, @nzw0301, @sfujiwara, @sile, @sskarkhanis, @toshihikoyanase, @upura, @y0z, @ytknzw, @yuk1ty, @zchenry

  • v2.1.0 (Sep 7, 2020)

    This is the release note of v2.1.0.

    Optuna v2.1.0 will be the last version to support Python 3.5. See #1067.

    Highlights

    Allowing objective(study.best_trial)

    FrozenTrial used to subclass object but now implements BaseTrial. It can be used in places where a Trial is expected, including user-defined objective functions.

    Re-evaluating the objective functions with the best parameter configuration is now straightforward. See #1503 for more details.

    study.optimize(objective, n_trials=100)
    best_trial = study.best_trial
    best_value = objective(best_trial)  # Did not work prior to v2.1.0.
    

    IPOP-CMA-ES Sampling Algorithm

    CmaEsSampler comes with an experimental option to switch to IPOP-CMA-ES. This algorithm restarts the strategy with an increased population size after premature convergence, allowing a more explorative search. See #1548 for more details.

    (Figure: comparison of the new option against the previous CmaEsSampler and RandomSampler.)
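
    A minimal sketch of enabling the restart strategy with the experimental arguments from #1548:

    import optuna

    # Restart CMA-ES with an increased population size upon premature convergence.
    sampler = optuna.samplers.CmaEsSampler(restart_strategy="ipop", inc_popsize=2)
    study = optuna.create_study(sampler=sampler)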

    Optuna & MLflow on Kubernetes Example

    Optuna can be easily integrated with MLflow on Kubernetes clusters. The new example is a great introduction that gets you started with a few commands. See #1464 for more details.

    Providing Type Hinting to Applications

    Type hint information is packaged following PEP 561. Users of Optuna can now run style checkers against the framework. Note that applications which currently ignore missing imports may see new type-check errors due to this change. See #1720 for more details.
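
    For example, user code like the following sketch can now be checked with mypy; the file name is a placeholder:

    # check_types.py -- run `mypy check_types.py` to type-check against Optuna's annotations.
    import optuna

    def objective(trial: optuna.trial.Trial) -> float:
        x = trial.suggest_float("x", -10.0, 10.0)
        return x ** 2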

    Breaking Changes

    Configuration files for AllenNLPExecutor may need to be updated. See #1544 for more details.

    • Remove allennlp.common.params.infer_and_cast from AllenNLP integrations (#1544)
    • Deprecate optuna.integration.KerasPruningCallback (#1670, thanks @VamshiTeja!)
    • Make Optuna PEP 561 Compliant (#1720, thanks @MarioIshac!)

    New Features

    • Add sampling functions to FrozenTrial (#1503, thanks @nzw0301!)
    • Add modules to compute hypervolume (#1537)
    • Add IPOP-CMA-ES support in CmaEsSampler (#1548)
    • Implement skorch pruning callback (#1668)

    Enhancements

    • Make sampling from trunc-norm efficient in TPESampler (#1562)
    • Add trials to cache when awaking WAITING trials in _CachedStorage (#1570)
    • Add log in create_new_study method of storage classes (#1629, thanks @tohmae!)
    • Add border to markers in contour plot (#1691, thanks @zchenry!)
    • Implement hypervolume calculator for two-dimensional space (#1771)

    Bug Fixes

    • Avoid to sample the value which equals to upper bound (#1558)
    • Exit thread after session is destroyed (#1676, thanks @KoyamaSohei!)
    • Disable feature_pre_filter in LightGBMTuner (#1774)
    • Fix fANOVA for IntLogUniformDistribution (#1790)

    Installation

    • Add packaging in install_requires (#1551)
    • Fix failure of Keras integration due to TF2.3 (#1563)
    • Install fsspec<0.8.0 for Python 3.5 (#1596)
    • Specify the version of packaging to >= 20.0 (#1599, thanks @Isa-rentacs!)
    • Install lightgbm<3.0.0 to circumvent error with feature_pre_filter (#1773)

    Documentation

    • Fix link to the definition of StudySummary (#1533, thanks @nzw0301!)
    • Update log format in docs (#1538)
    • Integrate Sphinx Gallery to make tutorials easily downloadable (#1543)
    • Add AllenNLP pruner to list of pruners in tutorial (#1545)
    • Refine the help of study-name (#1565, thanks @belldandyxtq!)
    • Simplify contribution guidelines by removing rule about PR title naming (#1567)
    • Remove license section from README.md (#1573)
    • Update key features (#1582)
    • Simplify documentation of BaseDistribution.single (#1593)
    • Add navigation links for contributors to README.md (#1597)
    • Apply minor changes to CONTRIBUTING.md (#1601)
    • Add list of projects using Optuna to examples/README.md (#1605)
    • Add a news section to README.md (#1606)
    • Avoid the latest stable sphinx (#1613)
    • Add link to examples in tutorial (#1625)
    • Add the description of default pruner (MedianPruner) to the documentation (#1657, thanks @Chillee!)
    • Remove generated directories with make clean (#1658)
    • Delete a useless auto generated directory (#1708)
    • Divide a section for each integration repository (#1709)
    • Add example to optuna.study.create_study (#1711, thanks @Ruketa!)
    • Add example to optuna.study.load_study (#1712, thanks @bigbird555!)
    • Fix broken doctest example code (#1713)
    • Add some notes and usage example for the hypervolume computing module (#1715)
    • Fix issue where doctests are not executed (#1723, thanks @harupy!)
    • Add example to optuna.study.Study.optimize (#1726, thanks @norihitoishida!)
    • Add target for doctest to Makefile (#1732, thanks @harupy!)
    • Add example to optuna.study.delete_study (#1741, thanks @norihitoishida!)
    • Add example to optuna.study.get_all_study_summaries (#1742, thanks @norihitoishida!)
    • Add example to optuna.study.Study.set_user_attr (#1744, thanks @norihitoishida!)
    • Add example to optuna.study.Study.user_attrs (#1745, thanks @norihitoishida!)
    • Add example to optuna.study.Study.get_trials (#1746, thanks @norihitoishida!)
    • Add example to optuna.multi_objective.study.MultiObjectiveStudy.optimize (#1747, thanks @norihitoishida!)
    • Add explanation for optuna.trial (#1748)
    • Add example to optuna.multi_objective.study.create_study (#1749, thanks @norihitoishida!)
    • Add example to optuna.multi_objective.study.load_study (#1750, thanks @norihitoishida!)
    • Add example to optuna.study.Study.stop (#1752, thanks @Ruketa!)
    • Re-generate contour plot example with padding (#1758)

    Examples

    • Add an example of Kubernetes, PyTorchLightning, and MLflow (#1464)
    • Create study before multiple workers are launched in Kubernetes MLflow example (#1536)
    • Fix typo in examples/kubernetes/mlflow/README.md (#1540)
    • Reduce search space for AllenNLP example (#1542)
    • Introduce plot_param_importances in example (#1555)
    • Removing references to deprecated optuna study optimize commands from examples (#1566, thanks @ritvik1512!)
    • Add scripts to run examples/kubernetes/* (#1584, thanks @VamshiTeja!)
    • Update Kubernetes example of "simple" to avoid potential errors (#1600, thanks @Nishikoh!)
    • Implement skorch pruning callback (#1668)
    • Add a tf.keras example (#1681, thanks @sfujiwara!)
    • Update examples/pytorch_simple.py (#1725, thanks @wangxin0716!)
    • Fix Binh and Korn function in MO example (#1757)

    Tests

    • Test _CachedStorage in test_study.py (#1575)
    • Rename tests/multi_objective as tests/multi_objective_tests (#1586)
    • Do not use deprecated pytorch_lightning.data_loader decorator (#1667)
    • Add test for hypervolume computation for solution sets with duplicate points (#1731)

    Code Fixes

    • Match the order of definition in trial (#1528, thanks @nzw0301!)
    • Add type hints to storage (#1556)
    • Add trials to cache when awaking WAITING trials in _CachedStorage (#1570)
    • Use packaging to check the library version (#1610, thanks @VamshiTeja!)
    • Fix import order of packaging.version (#1623)
    • Refactor TPE's sample_from_categorical_dist (#1630)
    • Fix error messages in TPESampler (#1631, thanks @kstoneriv3!)
    • Add code comment about n_ei_candidates for categorical parameters (#1637)
    • Add type hints into optuna/integration/keras.py (#1642, thanks @airyou!)
    • Fix how to use black in CONTRIBUTING.md (#1646)
    • Add type hints into optuna/cli.py (#1648, thanks @airyou!)
    • Add type hints into optuna/dashboard.py, optuna/integration/__init__.py (#1653, thanks @airyou!)
    • Add type hints optuna/integration/_lightgbm_tuner (#1655, thanks @upura!)
    • Fix LightGBM Tuner import code (#1659)
    • Add type hints to optuna/storages/__init__.py (#1661, thanks @akihironitta!)
    • Add type hints to optuna/trial (#1662, thanks @upura!)
    • Enable flake8 E231 (#1663, thanks @harupy!)
    • Add type hints to optuna/testing (#1665, thanks @upura!)
    • Add type hints to tests/storages_tests/rdb_tests (#1666, thanks @akihironitta!)
    • Add type hints to optuna/samplers (#1673, thanks @akihironitta!)
    • Fix type hint of optuna.samplers._random (#1678, thanks @nyanhi!)
    • Add type hints into optuna/integration/mxnet.py (#1679, thanks @norihitoishida!)
    • Fix type hint of optuna/pruners/_nop.py (#1680, thanks @Ruketa!)
    • Update Type Hints: prunes/_percentile.py and prunes/_median.py (#1682, thanks @ytknzw!)
    • Fix incorrect type annotations for args and kwargs (#1684, thanks @harupy!)
    • Update type hints in optuna/pruners/_base.py and optuna/pruners/_successive_halving.py (#1685, thanks @ytknzw!)
    • Add type hints to test_optimization_history.py (#1686, thanks @yosupo06!)
    • Fix type hint of tests/pruners_tests/test_median.py (#1687, thanks @polyomino-24!)
    • Type hint and reformat of files under visualization_tests (#1689, thanks @gasin!)
    • Remove unused argument trial from optuna.samplers._tpe.sampler._get_observation_pairs (#1692, thanks @ytknzw!)
    • Add type hints into optuna/integration/chainer.py (#1693, thanks @norihitoishida!)
    • Add type hints to optuna/integration/tensorflow.py (#1698, thanks @uenoku!)
    • Add type hints into optuna/integration/chainermn.py (#1699, thanks @norihitoishida!)
    • Add type hints to optuna/integration/xgboost.py (#1700, thanks @Ruketa!)
    • Add type hints to files under tests/integration_tests (#1701, thanks @gasin!)
    • Use Optional for keyword arguments that default to None (#1703, thanks @harupy!)
    • Fix type hint of all the rest files under tests/ (#1704, thanks @gasin!)
    • Fix type hint of optuna/integration (#1705, thanks @akihironitta!)
    • Add l2 metric aliases to LightGBMTuner (#1717, thanks @thigm85!)
    • Convert type comments in optuna/study.py into type annotations (#1724, thanks @harupy!)
    • Apply black==20.8b1 (#1730)
    • Fix type hint of optuna/integration/sklearn.py (#1735, thanks @akihironitta!)
    • Add type hints into optuna/structs.py (#1743, thanks @norihitoishida!)
    • Fix typo in optuna/samplers/_tpe/parzen_estimator.py (#1754, thanks @akihironitta!)

    Continuous Integration

    • Temporarily skip allennlp_jsonnet.py example in CI (#1527)
    • Run TensorFlow on Python 3.8 (#1564)
    • Bump PyTorch to 1.6 (#1572)
    • Skip entire allennlp example directory in CI (#1585)
    • Use actions/[email protected] (#1594)
    • Add cache to GitHub Actions Workflows (#1595)
    • Run example after docker build to ensure that built image is setup properly (#1635, thanks @harupy!)
    • Use cache-from to build docker image faster (#1638, thanks @harupy!)
    • Fix issue where doctests are not executed (#1723, thanks @harupy!)

    Other

    • Remove Swig installation from Dockerfile (#1462)
    • Add: How to run examples with our Docker images (#1554)
    • GitHub Action labeler (#1591)
    • Do not trigger labeler on push (#1624)
    • Fix invalid YAML syntax (#1626)
    • Pin sphinx version to 3.0.4 (#1627, thanks @harupy!)
    • Add .dockerignore (#1633, thanks @harupy!)
    • Fix how to use black in CONTRIBUTING.md (#1646)
    • Add pyproject.toml for easier use of black (#1649)
    • Fix docs/Makefile (#1650)
    • Ignore vscode configs (#1660)
    • Make Optuna PEP 561 Compliant (#1720, thanks @MarioIshac!)
  • v2.0.0 (Jul 29, 2020)

    This is the release note of v2.0.0.

    Highlights

    Optuna 2.0, the second major version, is released. It accommodates a multitude of new features, including Hyperband pruning, hyperparameter importance, built-in CMA-ES support, grid sampler, and LightGBM integration. Storage access is also improved, significantly speeding up optimization. The documentation has been revised and its navigation made easier. See the blog for details.

    Hyperband Pruner

    The stable version of HyperbandPruner is available with a simpler interface and improved performance.
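
    A minimal sketch of the simplified interface; the argument values here are illustrative:

    import optuna

    pruner = optuna.pruners.HyperbandPruner(min_resource=1, max_resource=100, reduction_factor=3)
    study = optuna.create_study(pruner=pruner)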

    Hyperparameter Importance

    The stable version of the hyperparameter importance module is available.

    • Our implementation of fANOVA, FanovaImportanceEvaluator, is now the default importance evaluator. It requires only scikit-learn, replacing the previous dependency on the fanova package.
    • A new importance visualization function, visualization.plot_param_importances, is available.
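
    A minimal sketch of evaluating and plotting importances with the stable API; the toy objective is ours:

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        y = trial.suggest_int("y", 0, 10)
        return x ** 2 + y

    study = optuna.create_study()
    study.optimize(objective, n_trials=100)

    importances = optuna.importance.get_param_importances(study)  # fANOVA by default.
    fig = optuna.visualization.plot_param_importances(study)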


    Built-in CMA-ES Sampler

    The stable version of CmaEsSampler is available. This new CmaEsSampler can be used with pruning for major performance improvements.

    Grid Sampler

    The stable version of GridSampler is available through an intuitive interface for users familiar with Optuna. When the entire grid is exhausted, the optimization stops automatically, so you can specify n_trials=None.
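
    A minimal sketch of an exhaustive grid search; the grid itself is illustrative:

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -1, 1)
        y = trial.suggest_int("y", 0, 2)
        return x ** 2 + y

    search_space = {"x": [-1.0, 0.0, 1.0], "y": [0, 1, 2]}
    study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
    study.optimize(objective, n_trials=None)  # Stops automatically once the grid is exhausted.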

    LightGBM Tuner

    The stable version of LightGBMTuner is available. The behavior of the verbosity option has been improved: previously, the random seed was unexpectedly fixed when the verbosity level was nonzero, but now the user-given seed is used correctly.

    Experimental Features

    • New integration modules: TensorBoard integration, Catalyst integration, and AllenNLP pruning integration are available as experimental.
    • A new visualization function for multi-objective optimization: multi_objective.visualization.plot_pareto_front is available as an experimental feature.
    • New methods to manually create/add trials: trial.create_trial and study.Study.add_trial are available as experimental features.

    Breaking Changes

    Several deprecated features (e.g., Study.study_id and Trial.trial_id) are removed. See #1346 for details.

    • Remove deprecated features in optuna.trial (#1371)
    • Remove deprecated arguments from LightGBMTuner (#1374)
    • Remove deprecated features in integration/chainermn.py (#1375)
    • Remove deprecated features in optuna/structs.py (#1377)
    • Remove deprecated features in optuna/study.py (#1379)

    Several features are deprecated.

    • Deprecate optuna study optimize command (#1384)
    • Deprecate step argument in IntLogUniformDistribution (#1387, thanks @nzw0301!)

    Other changes:

    • BaseStorage.set_trial_param to return None instead of bool (#1327)
    • Match suggest_float and suggest_int specifications on step and log arguments (#1329)
    • BaseStorage.set_trial_intermediate_value to return None instead of bool (#1337)
    • Make optuna.integration.lightgbm_tuner private (#1378)
    • Fix pruner index handling to 0-indexing (#1430, thanks @bigbird555!)
    • Continue to allow using IntLogUniformDistribution.step during deprecation (#1438)
    • Align LightGBMTuner verbosity level to the original LightGBM (#1504)

    New Features

    • Add snippet of API for integration with Catalyst (#1056, thanks @VladSkripniuk!)
    • Add pruned trials to trials being considered in CmaEsSampler (#1229)
    • Add pruned trials to trials being considered in SkoptSampler (#1431)
    • Add TensorBoard integration (#1244, thanks @VladSkripniuk!)
    • Add deprecation decorator (#1382)
    • Add plot_pareto_front function (#1303)
    • Remove experimental decorator from HyperbandPruner (#1435)
    • Remove experimental decorators from hyperparameter importance (HPI) features (#1440)
    • Remove experimental decorator from Study.stop (#1450)
    • Remove experimental decorator from GridSampler (#1451)
    • Remove experimental decorators from LightGBMTuner (#1452)
    • Introducing optuna.visualization.plot_param_importances (#1299)
    • Rename integration/CmaEsSampler to integration/PyCmaSampler (#1325)
    • Match suggest_float and suggest_int specifications on step and log arguments (#1329)
    • optuna.create_trial and Study.add_trial to create custom studies (#1335)
    • Allow omitting the removal version in deprecated (#1418)
    • Mark CatalystPruningCallback integration as experimental (#1465)
    • Followup TensorBoard integration (#1475)
    • Implement a pruning callback for AllenNLP (#1399)
    • Remove experimental decorator from HPI visualization (#1477)
    • Add optuna.visualization.plot_edf function (#1482)
    • FanovaImportanceEvaluator as default importance evaluator (#1491)
    • Reduce HPI variance with default args (#1492)

    Enhancements

    • Support automatic stop of GridSampler (#1026)
    • Implement fANOVA using sklearn instead of fanova (#1106)
    • Add a caching mechanism to make NSGAIIMultiObjectiveSampler faster (#1257)
    • Add log argument support for suggest_int of skopt integration (#1277, thanks @nzw0301!)
    • Add read_trials_from_remote_storage method to Storage implementations (#1298)
    • Implement log argument for suggest_int of pycma integration (#1302)
    • Raise ImportError if bokeh version is 2.0.0 or newer (#1326)
    • Fix the x-axis title of the hyperparameter importances plot (#1336, thanks @harupy!)
    • BaseStorage.set_trial_intermediate_value to return None instead of bool (#1337)
    • Simplify log messages (#1345)
    • Improve layout of plot_param_importances figure (#1355)
    • Do not run the GC after every trial by default (#1380)
    • Skip storage access if logging is disabled (#1403)
    • Specify stacklevel for warnings.warn for more helpful warning message (#1419, thanks @harupy!)
    • Replace DeprecationWarning with FutureWarning in @deprecated (#1428)
    • Fix pruner index handling to 0-indexing (#1430, thanks @bigbird555!)
    • Load environment variables in AllenNLPExecutor (#1449)
    • Stop overwriting seed in LightGBMTuner (#1461)
    • Suppress progress bar of LightGBMTuner if verbosity == 1 (#1460)
    • RDB storage to do eager backref "join"s when fetching all trials (#1501)
    • Overwrite intermediate values correctly (#1517)
    • Overwrite parameters correctly (#1518)
    • Always cast choices into tuple in CategoricalDistribution (#1520)

    Bug Fixes

    RDB Storage Bugs on Distributed Optimization are Fixed

    Several critical bugs in the RDB storage, most related to distributed optimization, are addressed in this release.

    • Fix CMA-ES boundary constraints and initial mean vector of LogUniformDistribution (#1243)
    • Temporary hotfix for sphinx update breaking existing type annotations (#1342)
    • Fix for PyTorch Lightning v0.8.0 (#1392)
    • Fix exception handling in ChainerMNStudy.optimize (#1406)
    • Use step to calculate range of IntUniformDistribution in PyCmaSampler (#1456)
    • Avoid exploding queries with large exclusion sets (#1467)
    • Temporary fix for problem with length limit of 5000 in MLflow (#1481, thanks @PhilipMay!)
    • Fix race condition for trial number computation (#1490)
    • Fix CachedStorage skipping trial param row insertion on cache miss (#1498)
    • Fix _CachedStorage and RDBStorage distribution compatibility check race condition (#1506)
    • Fix frequent deadlock caused by conditional locks (#1514)

    Installation

    • [Backport] Add packaging in install_requires (#1561)
    • Set python_requires in setup.py to clarify supported Python version (#1350, thanks @harupy!)
    • Specify classifiers in setup.py (#1358)
    • Hotfix to avoid latest keras 2.4.0 (#1386)
    • Hotfix to avoid PyTorch Lightning 0.8.0 (#1391)
    • Relax sphinx version (#1393)
    • Update version constraints of cmaes (#1404)
    • Align sphinx-rtd-theme and Python versions used on Read the Docs to CircleCI (#1434, thanks @harupy!)
    • Remove checking and alerting installation pfnopt (#1474)
    • Avoid latest sphinx (#1485)
    • Add packaging in install_requires (#1561)

    Documentation

    • Fix experimental decorator (#1248, thanks @harupy!)
    • Create a documentation for the root namespace optuna (#1278)
    • Add missing documentation for BaseStorage.set_trial_param (#1316)
    • Fix documented exception type in BaseStorage.get_best_trial and add unit tests (#1317)
    • Add hyperlinks to key features (#1331)
    • Add .readthedocs.yml to use the same document dependencies on the CI and Read the Docs (#1354, thanks @harupy!)
    • Use Colab to demonstrate a notebook instead of nbviewer (#1360)
    • Hotfix to allow building the docs by avoiding latest sphinx (#1369)
    • Update layout and color of docs (#1370)
    • Add FAQ section about OOM (#1385)
    • Rename a title of reference to a module name (#1390)
    • Add a list of functions and classes for each module in reference doc (#1400)
    • Use .. warning:: instead of .. note:: for the deprecation decorator (#1407)
    • Always use Sphinx RTD theme (#1414)
    • Fix color of version/build in documentation sidebar (#1415)
    • Use a different font color for argument names (#1436, thanks @harupy!)
    • Move css from _templates/footer.html to _static/css/custom.css (#1439)
    • Add missing commas in FAQ (#1458)
    • Apply auto-formatting to custom.css to make it pretty and consistent (#1463, thanks @harupy!)
    • Update CONTRIBUTING.md (#1466)
    • Add missing CatalystPruningCallback in the documentation (#1468, thanks @harupy!)
    • Fix incorrect type annotations for catch (#1473, thanks @harupy!)
    • Fix double FrozenTrial (#1478)
    • Wider main content container in the documentation (#1483)
    • Add TensorBoardCallback to docs (#1486)
    • Add description about zero-based numbering of step (#1489)
    • Add links to examples from the integration references (#1507)
    • Fix broken link in plot_edf (#1510)
    • Update docs of default importance evaluator (#1524)

    Examples

    • Set timeout for relatively long-running examples (#1349)
    • Fix broken link to example and add README for AllenNLP examples (#1397)
    • Add whitespace before opening parenthesis (#1398)
    • Fix GPU run for PyTorch Ignite and Lightning examples (#1444, thanks @arisliang!)
    • Add Stable-Baselines3 RL Example (#1420, thanks @araffin!)
    • Replace suggest_*uniform in examples with suggest_(int|float) (#1470)

    Tests

    • Fix plot_param_importances test (#1328)
    • Fix incorrect test names in test_experimental.py (#1332, thanks @harupy!)
    • Simplify decorator tests (#1423)
    • Add a test for CmaEsSampler._get_trials() (#1433)
    • Use argument of pytorch_lightning.Trainer to disable checkpoint_callback (#1453)
    • Install RDB servers and their bindings for storage tests (#1497)
    • Upgrade versions of pytorch and torchvision (#1502)
    • Make HPI tests deterministic (#1505)

    Code Fixes

    • Introduces optuna._imports.try_import to DRY optional imports (#1315)
    • Friendlier error message for unsupported plotly versions (#1338)
    • Rename private modules in optuna.visualization (#1359)
    • Rename private modules in optuna.pruners (#1361)
    • Rename private modules in optuna.samplers (#1362)
    • Change logger to _trial's module variable (#1363)
    • Remove deprecated features in HyperbandPruner (#1366)
    • Add missing __init__.py files (#1367, thanks @harupy!)
    • Fix double quotes from Black formatting (#1372)
    • Rename private modules in optuna.storages (#1373)
    • Add a list of functions and classes for each module in reference doc (#1400)
    • Apply deprecation decorator (#1413)
    • Remove unnecessary exception handling for GridSampler (#1416)
    • Remove either warnings.warn() or optuna.logging.Logger.warning() from codes which have both of them (#1421)
    • Simplify usage of deprecated by omitting removed version (#1422)
    • Apply experimental decorator (#1424)
    • Fix the experimental warning message for CmaEsSampler (#1432)
    • Remove optuna.structs from MLflow integration (#1437)
    • Add type hints to slice.py (#1267, thanks @bigbird555!)
    • Add type hints to intermediate_values.py (#1268, thanks @bigbird555!)
    • Add type hints to optimization_history.py (#1269, thanks @bigbird555!)
    • Add type hints to utils.py (#1270, thanks @bigbird555!)
    • Add type hints to test_logging.py (#1284, thanks @bigbird555!)
    • Add type hints to test_chainer.py (#1286, thanks @bigbird555!)
    • Add type hints to test_keras.py (#1287, thanks @bigbird555!)
    • Add type hints to test_cma.py (#1288, thanks @bigbird555!)
    • Add type hints to test_fastai.py (#1289, thanks @bigbird555!)
    • Add type hints to test_integration.py (#1293, thanks @bigbird555!)
    • Add type hints to test_mlflow.py (#1322, thanks @bigbird555!)
    • Add type hints to test_mxnet.py (#1323, thanks @bigbird555!)
    • Add type hints to optimize.py (#1364, thanks @bigbird555!)
    • Replace suggest_*uniform in examples with suggest_(int|float) (#1470)
    • Add type hints to distributions.py (#1513)
    • Remove unnecessary FloatingPointDistributionType (#1516)

    Continuous Integration

    • Add a step to push images to Docker Hub (#1295)
    • Check code coverage in tests-python37 on CircleCI (#1348)
    • Stop building Docker images in Pull Requests (#1389)
    • Prevent doc-link from running on unrelated status update events (#1410, thanks @harupy!)
    • Avoid latest ConfigSpace where Python 3.5 is dropped (#1471)
    • Run unit tests on GitHub Actions (#1352)
    • Use circleci/python for dev image and install RDB servers (#1495)
    • Install RDB servers and their bindings for storage tests (#1497)
    • Fix dockerimage.yml format (#1511)
    • Revert #1495 and #1511 (#1512)
    • Run daily unit tests (#1515)

    Other

    • Add TestPyPI release to workflow (#1291)
    • Add PyPI release to workflow (#1306)
    • Exempt issues with no-stale label from stale bot (#1321)
    • Remove stale labels from Issues or PRs when they are updated or commented on (#1409)
    • Exempt PRs with no-stale label from stale bot (#1427)
    • Update the documentation section in CONTRIBUTING.md (#1469, thanks @harupy!)
    • Bump up version to 2.0.0 (#1525)
  • v2.0.0-rc0 (Jul 6, 2020)

    A release candidate for the second major version of Optuna, v2.0.0-rc0, is released! This release includes many new features, cleaned-up interfaces, performance improvements, internal refactorings, and more. If you find any problems with this release candidate, please feel free to report them via GitHub Issues or Gitter.

    Highlights

    Hyperband Pruner

    The stable version of HyperbandPruner is available, with a simpler interface and improved performance.

    Hyperparameter Importance

    The stable version of the hyperparameter importance module is available.

    • Our own implementation of fANOVA, FanovaImportanceEvaluator. While the previous implementation required the fanova package, the new FanovaImportanceEvaluator only requires scikit-learn.
    • A new importance visualization function, visualization.plot_param_importances, is available.

    Built-in CMA-ES Sampler

    The stable version of CmaEsSampler is available. This new CmaEsSampler can be used with pruning, one of Optuna's important features, for great performance improvements.

    Grid Sampler

    The stable version of GridSampler is available and can be used through an intuitive interface for users familiar with Optuna. When the entire grid is exhausted, the optimization stops automatically, so you can specify n_trials=None.

    LightGBM Tuner

    The stable version of LightGBMTuner is available. The behavior of the verbosity option has been improved: previously, the random seed was unexpectedly fixed when the verbosity level was nonzero, but now the user-given seed is used correctly.

    Experimental Features

    • New integration modules: TensorBoard integration and Catalyst integration are available as experimental.
    • A new visualization function for multi-objective optimization: multi_objective.visualization.plot_pareto_front is available as an experimental feature.
    • New methods to manually create/add trials: trial.create_trial and study.Study.add_trial are available as experimental features.

    Breaking Changes

    Several deprecated features (e.g., Study.study_id and Trial.trial_id) are removed. See #1346 for details.

    • Remove deprecated features in optuna.trial. (#1371)
    • Remove deprecated arguments from LightGBMTuner. (#1374)
    • Remove deprecated features in integration/chainermn.py. (#1375)
    • Remove deprecated features in optuna/structs.py. (#1377)
    • Remove deprecated features in optuna/study.py. (#1379)

    Several features are deprecated.

    • Deprecate optuna study optimize command. (#1384)
    • Deprecate step argument in IntLogUniformDistribution. (#1387, thanks @nzw0301!)

    Other changes:

    • BaseStorage.set_trial_param to return None instead of bool. (#1327)
    • Match suggest_float and suggest_int specifications on step and log arguments. (#1329)
    • BaseStorage.set_trial_intermediate_value to return None instead of bool. (#1337)
    • Make optuna.integration.lightgbm_tuner private. (#1378)
    • Fix pruner index handling to 0-indexing. (#1430, thanks @bigbird555!)
    • Continue to allow using IntLogUniformDistribution.step during deprecation. (#1438)

    New Features

    • Add snippet of API for integration with Catalyst. (#1056, thanks @VladSkripniuk!)
    • Add pruned trials to trials being considered in CmaEsSampler. (#1229)
    • Add pruned trials to trials being considered in SkoptSampler. (#1431)
    • Add TensorBoard integration. (#1244, thanks @VladSkripniuk!)
    • Add deprecation decorator. (#1382)
    • Add plot_pareto_front function. (#1303)
    • Remove experimental decorator from HyperbandPruner. (#1435)
    • Remove experimental decorators from hyperparameter importance (HPI) features. (#1440)
    • Remove experimental decorator from Study.stop. (#1450)
    • Remove experimental decorator from GridSampler. (#1451)
    • Remove experimental decorators from LightGBMTuner. (#1452)
    • Introducing optuna.visualization.plot_param_importances. (#1299)
    • Rename integration/CmaEsSampler to integration/PyCmaSampler. (#1325)
    • Match suggest_float and suggest_int specifications on step and log arguments. (#1329)
    • optuna.create_trial and Study.add_trial to create custom studies. (#1335)
    • Allow omitting the removal version in deprecated. (#1418)
    • Mark CatalystPruningCallback integration as experimental. (#1465)
    • Followup TensorBoard integration. (#1475)

    Enhancements

    • Support automatic stop of GridSampler. (#1026)
    • Implement fANOVA using sklearn instead of fanova. (#1106)
    • Add a caching mechanism to make NSGAIIMultiObjectiveSampler faster. (#1257)
    • Add log argument support for suggest_int of skopt integration. (#1277, thanks @nzw0301!)
    • Add read_trials_from_remote_storage method to Storage implementations. (#1298)
    • Implement log argument for suggest_int of pycma integration. (#1302)
    • Raise ImportError if bokeh version is 2.0.0 or newer. (#1326)
    • Fix the x-axis title of the hyperparameter importances plot. (#1336, thanks @harupy!)
    • BaseStorage.set_trial_intermediate_value to return None instead of bool. (#1337)
    • Simplify log messages. (#1345)
    • Improve layout of plot_param_importances figure. (#1355)
    • Do not run the GC after every trial by default. (#1380)
    • Skip storage access if logging is disabled. (#1403)
    • Specify stacklevel for warnings.warn for more helpful warning message. (#1419, thanks @harupy!)
    • Replace DeprecationWarning with FutureWarning in @deprecated. (#1428)
    • Fix pruner index handling to 0-indexing. (#1430, thanks @bigbird555!)
    • Load environment variables in AllenNLPExecutor. (#1449)
    • Stop overwriting seed in LightGBMTuner. (#1461)

    Bug Fixes

    • Fix CMA-ES boundary constraints and initial mean vector of LogUniformDistribution. (#1243)
    • Temporary hotfix for sphinx update breaking existing type annotations. (#1342)
    • Fix for PyTorch Lightning v0.8.0. (#1392)
    • Fix exception handling in ChainerMNStudy.optimize. (#1406)
    • Use step to calculate range of IntUniformDistribution in PyCmaSampler. (#1456)

    Installation

    • Set python_requires in setup.py to clarify supported Python version. (#1350, thanks @harupy!)
    • Specify classifiers in setup.py. (#1358)
    • Hotfix to avoid latest keras 2.4.0. (#1386)
    • Hotfix to avoid PyTorch Lightning 0.8.0. (#1391)
    • Relax sphinx version. (#1393)
    • Update version constraints of cmaes. (#1404)
    • Align sphinx-rtd-theme and Python versions used on Read the Docs to CircleCI. (#1434, thanks @harupy!)
    • Remove checking and alerting installation pfnopt. (#1474)

    Documentation

    • Fix experimental decorator. (#1248, thanks @harupy!)
    • Create a documentation for the root namespace optuna. (#1278)
    • Add missing documentation for BaseStorage.set_trial_param. (#1316)
    • Fix documented exception type in BaseStorage.get_best_trial and add unit tests. (#1317)
    • Add hyperlinks to key features. (#1331)
    • Add .readthedocs.yml to use the same document dependencies on the CI and Read the Docs. (#1354, thanks @harupy!)
    • Use Colab to demonstrate a notebook instead of nbviewer. (#1360)
    • Hotfix to allow building the docs by avoiding latest sphinx. (#1369)
    • Update layout and color of docs. (#1370)
    • Add FAQ section about OOM. (#1385)
    • Rename a title of reference to a module name. (#1390)
    • Add a list of functions and classes for each module in reference doc. (#1400)
    • Use .. warning:: instead of .. note:: for the deprecation decorator. (#1407)
    • Always use Sphinx RTD theme. (#1414)
    • Fix color of version/build in documentation sidebar. (#1415)
    • Use a different font color for argument names. (#1436, thanks @harupy!)
    • Move css from _templates/footer.html to _static/css/custom.css. (#1439)
    • Add missing commas in FAQ. (#1458)
    • Apply auto-formatting to custom.css to make it pretty and consistent. (#1463, thanks @harupy!)
    • Update CONTRIBUTING.md. (#1466)
    • Add missing CatalystPruningCallback in the documentation. (#1468, thanks @harupy!)
    • Fix incorrect type annotations for catch. (#1473, thanks @harupy!)

    Examples

    • Set timeout for relatively long-running examples. (#1349)
    • Fix broken link to example and add README for AllenNLP examples. (#1397)
    • Add whitespace before opening parenthesis. (#1398)
    • Fix GPU run for PyTorch Ignite and Lightning examples. (#1444, thanks @arisliang!)

    Tests

    • Fix plot_param_importances test. (#1328)
    • Fix incorrect test names in test_experimental.py. (#1332, thanks @harupy!)
    • Simplify decorator tests. (#1423)
    • Add a test for CmaEsSampler._get_trials(). (#1433)
    • Use argument of pytorch_lightning.Trainer to disable checkpoint_callback. (#1453)

    Code Fixes

    • Introduces optuna._imports.try_import to DRY optional imports. (#1315)
    • Friendlier error message for unsupported plotly versions. (#1338)
    • Rename private modules in optuna.visualization. (#1359)
    • Rename private modules in optuna.pruners. (#1361)
    • Rename private modules in optuna.samplers. (#1362)
    • Change logger to _trial's module variable. (#1363)
    • Remove deprecated features in HyperbandPruner. (#1366)
    • Add missing __init__.py files. (#1367, thanks @harupy!)
    • Fix double quotes from Black formatting. (#1372)
    • Rename private modules in optuna.storages. (#1373)
    • Add a list of functions and classes for each module in reference doc. (#1400)
    • Apply deprecation decorator. (#1413)
    • Remove unnecessary exception handling for GridSampler. (#1416)
    • Remove either warnings.warn() or optuna.logging.Logger.warning() from codes which have both of them. (#1421)
    • Simplify usage of deprecated by omitting removed version. (#1422)
    • Apply experimental decorator. (#1424)
    • Fix the experimental warning message for CmaEsSampler. (#1432)
    • Remove optuna.structs from MLflow integration. (#1437)
    • Add type hints to slice.py. (#1267, thanks @bigbird555!)
    • Add type hints to intermediate_values.py. (#1268, thanks @bigbird555!)
    • Add type hints to optimization_history.py. (#1269, thanks @bigbird555!)
    • Add type hints to utils.py. (#1270, thanks @bigbird555!)
    • Add type hints to test_logging.py. (#1284, thanks @bigbird555!)
    • Add type hints to test_chainer.py. (#1286, thanks @bigbird555!)
    • Add type hints to test_keras.py. (#1287, thanks @bigbird555!)
    • Add type hints to test_cma.py. (#1288, thanks @bigbird555!)
    • Add type hints to test_fastai.py. (#1289, thanks @bigbird555!)
    • Add type hints to test_integration.py. (#1293, thanks @bigbird555!)
    • Add type hints to test_mlflow.py. (#1322, thanks @bigbird555!)
    • Add type hints to test_mxnet.py. (#1323, thanks @bigbird555!)
    • Add type hints to optimize.py. (#1364, thanks @bigbird555!)

    Continuous Integration

    • Add a step to push images to Docker Hub. (#1295)
    • Check code coverage in tests-python37 on CircleCI. (#1348)
    • Stop building Docker images in Pull Requests. (#1389)
    • Prevent doc-link from running on unrelated status update events. (#1410, thanks @harupy!)
    • Avoid latest ConfigSpace where Python 3.5 is dropped. (#1471)

    Other

    • Add TestPyPI release to workflow. (#1291)
    • Add PyPI release to workflow. (#1306)
    • Exempt issues with no-stale label from stale bot. (#1321)
    • Remove stale labels from Issues or PRs when they are updated or commented on. (#1409)
    • Exempt PRs with no-stale label from stale bot. (#1427)
  • v1.5.0 (Jun 1, 2020)

    This is the release note of v1.5.0.

    Highlights

    LightGBM Tuner with Cross-validation

    LightGBM tuner, which provides efficient stepwise parameter tuning for LightGBM, supports cross-validation as an experimental feature with LightGBMTunerCV. See #1156 for details.


    NSGA-II

    A sampler based on NSGA-II, a well-known multi-objective optimization algorithm, is now available as the default multi-objective sampler. The following benchmark result, on the ZDT1 function, shows that NSGA-II outperforms random sampling. Please refer to #1163 for further details.
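
    Since NSGA-II is now the default, no explicit sampler is needed; a minimal sketch with a toy bi-objective function of ours:

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", 0, 1)
        y = trial.suggest_float("y", 0, 1)
        return x, (1 - x) * (1 + y)  # Two objectives to minimize.

    study = optuna.multi_objective.create_study(["minimize", "minimize"])  # NSGA-II by default.
    study.optimize(objective, n_trials=100)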


    Mean Decrease Impurity (MDI) Hyperparameter Importance Evaluator

    The default hyperparameter importance evaluator is replaced with a naive mean decrease impurity algorithm. It uses the random forest feature importances in scikit-learn and therefore requires that package. See #1253 for more details.
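
    A minimal sketch; the evaluator can also be passed explicitly:

    import optuna
    from optuna.importance import MeanDecreaseImpurityImportanceEvaluator

    # `study` is an already-optimized study (see the snippets elsewhere in these notes).
    importances = optuna.importance.get_param_importances(
        study, evaluator=MeanDecreaseImpurityImportanceEvaluator()
    )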


    optuna.TrialPruned Alias

    optuna.TrialPruned is a new alias for optuna.exceptions.TrialPruned. It is now possible to write shorter and more readable code when pruning trials. See #1204 for details.
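
    A minimal sketch using the new alias; the decaying value stands in for a real training loop:

    import optuna

    def objective(trial):
        lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
        value = 1.0
        for step in range(100):
            value *= 1.0 - lr  # Stand-in for one training step.
            trial.report(value, step)
            if trial.should_prune():
                raise optuna.TrialPruned()  # Shorter alias for optuna.exceptions.TrialPruned.
        return value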

    New Features

    • Add a method to stop study.optimize. (#1025)
    • Use --study-name instead of --study in CLI commands. (#1079, thanks @seungjaeryanlee!)
    • Add cross-validation support for LightGBMTuner. (#1156)
    • Add NSGA-II based multi-objective sampler. (#1163)
    • Implement log argument for suggest_int. (#1201, thanks @nzw0301!)
    • Import optuna.exceptions.TrialPruned in __init__.py. (#1204)
    • Mean Decrease Impurity (MDI) hyperparameter importance evaluator. (#1253)

    Enhancements

    • Add storage cache. (#1140)
    • Fix _get_observation_pairs for conditional parameters. (#1166, thanks @y0z!)
    • Alternative implementation to hide the interface so that all samplers can use HyperbandPruner. (#1196)
    • Fix for O(N) queries being produced if even a single trial is cached. (#1259, thanks @zzorba!)
    • Move caching mechanism from RDBStorage to _CachedStorage. (#1263)
    • Cache study-related info in _CachedStorage. (#1264)
    • Move deep-copies for optimization speed improvement. (#1274)
    • Implement log argument for suggest_int of ChainerMN integration. (#1275, thanks @nzw0301!)
    • Add warning when Trial.suggest_int modifies high. (#1276)
    • Input validation for IntLogUniformDistribution. (#1279, thanks @himkt!)

    Bug Fixes

    • Support multiple studies in InMemoryStorage. (#1228)
    • Fix out of bounds error of CMA-ES. (#1231)
    • Fix sklearn - skopt version incompatibility. (#1236)
    • Fix a type casting error when using CmaEsSampler. (#1240)
    • Upgrade the version of cmaes. (#1242)

    Documentation

    • Rename test_ to valid_ in docs and docstring. (#1167, thanks @himkt!)
    • Add storage specification to BaseStorage class doc. (#1174)
    • Add docstring to BaseStorage method interfaces. (#1175)
    • Add an explanation of failed trials from samplers' perspective. (#1214)
    • Add LightGBMTuner reference. (#1217)
    • Modifying code examples to include training data. (#1221)
    • Ask optuna tag in Stack Overflow question. (#1249)
    • Add notes for auto argument values in HyperbandPruner and SuccessiveHalvingPruner. (#1252)
    • Add description of observation_key in XGBoostPruningCallback. (#1260)
    • Cosmetic fixes to documentation in BaseStorage. (#1261)
    • Modify documentation and fix file extension in the test for AllenNLP integration. (#1265, thanks @himkt!)
    • Fix experimental decorator to decorate a class properly. (#1285, thanks @harupy!)

    Examples

    • Add pruning to PyTorch example. (#1119)
    • Use dump_best_config in example. (#1225, thanks @himkt!)
    • Stop suggesting using deprecated option in AllenNLP example. (#1282)
    • Add link to regression example in the header of keras_integration.py. (#1301, thanks @zishiwu123!)

    Tests

    • Increase test coverage of storage tests for single worker cases. (#1191)
    • Fix sklearn - skopt version incompatibility. (#1236)

    Code Fixes

    • Dissect trial.py. (#1210, thanks @himkt!)
    • Rename trial/*.py to trial/_*.py. (#1239)
    • Add type hints to contour.py. (#1254, thanks @bigbird555!)
    • Consistent Hyperband bracket ID variable names. (#1262)
    • Apply minor code fix to #1201. (#1273)
    • Avoid mutable default argument in AllenNLPExecutor.__init__. (#1280)
    • Reorder arguments of Trial.suggest_float. (#1292)
    • Fix unintended change on calculating n_brackets in HyperbandPruner. (#1294)
    • Add experimental decorator to LightGBMTuner and LightGBMTunerCV. (#1305)

    Continuous Integration

    • Add GitHub action that posts link to documentation. (#1247, thanks @harupy!)
    • Add a workflow to create distribution packages. (#1283)
    • Stop setting environment variables for GitHub Package. (#1296)
  • v1.4.0 (May 11, 2020)

    This is the release note of v1.4.0.

    Highlights

    Experimental Multi-objective Optimization

    Multi-objective optimization is available as an experimental feature. Currently, it only provides random sampling, but it will be continuously developed in the following releases. Feedback is highly welcomed. See #1054 for details.

    Enhancement of Storages

    A new Redis-based storage is available. It is a fast and flexible in-memory storage. It can also persist studies on disk without having to configure a relational database. It is still an experimental feature, and your feedback is highly welcomed. See #974 for details.
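
    A minimal sketch, assuming a Redis server running locally:

    import optuna

    storage = optuna.storages.RedisStorage(url="redis://localhost:6379/0")
    study = optuna.create_study(storage=storage)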

    Performance tuning has been applied to RDBStorage. For instance, creating study lists is now over 3000 times faster (from 7 minutes to 0.13 seconds). See #1109 for details.

    Experimental Integration Modules for MLflow and AllenNLP

    A new callback function is provided for MLflow users. It reports Optuna's optimization results (i.e., parameter values and metric values) to MLflow. See #1028 for details.
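
    A minimal sketch of attaching the callback; the tracking URI and metric name are assumptions:

    import optuna
    from optuna.integration.mlflow import MLflowCallback

    mlflc = MLflowCallback(tracking_uri="./mlruns", metric_name="accuracy")

    def objective(trial):
        x = trial.suggest_uniform("x", -10, 10)
        return (x - 2) ** 2

    study = optuna.create_study()
    study.optimize(objective, n_trials=20, callbacks=[mlflc])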

    A new integration module for AllenNLP is available. It enables you to reuse your jsonnet configuration files for hyperparameter tuning. See #1086 for details.
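
    A rough sketch of the executor; file paths are placeholders, and hyperparameters suggested on the trial are injected into the jsonnet config:

    import optuna

    def objective(trial):
        trial.suggest_loguniform("lr", 1e-4, 1e-1)  # Referenced from the config via std.extVar.
        executor = optuna.integration.AllenNLPExecutor(
            trial,
            "config.jsonnet",               # Placeholder config path.
            "result/" + str(trial.number),  # Serialization directory.
        )
        return executor.run()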

    Breaking Changes

    • Delete the argument is_higher_better from TensorFlowPruningHook. (#1083, thanks @nuka137!)
    • Applied @abc.abstractmethod decorator to the abstract methods of BaseTrial and fixed ChainerMNTrial. (#1087, thanks @gorogoroumaru!)
    • Input validation for LogUniformDistribution for negative domains. (#1099)

    New Features

    • Added RedisStorage class to support storing activity on Redis. (#974, thanks @pablete!)
    • Add MLFlow integration callback. (#1028, thanks @PhilipMay!)
    • Add study argument to optuna.integration.lightgbm.LightGBMTuner. (#1032)
    • Support multi-objective optimization. (#1054)
    • Add duration into FrozenTrial and DataFrame. (#1071)
    • Support parallel execution of LightGBMTuner. (#1076)
    • Add number property to FixedTrial and BaseTrial. (#1077)
    • Support DiscreteUniformDistribution in suggest_float. (#1081, thanks @himkt!)
    • Add AllenNLP integration. (#1086, thanks @himkt!)
    • Add an argument of max_resource to HyperbandPruner and deprecate n_brackets. (#1138)
    • Improve the trial allocation algorithm of HyperbandPruner. (#1141)
    • Add IntersectionSearchSpace to speed up the search space calculation. (#1142)
    • Implement AllenNLP config exporter to save training config with best_params in study. (#1150, thanks @himkt!)
    • Remove redundancy from HyperbandPruner by deprecating min_early_stopping_rate_low. (#1159)
    • Add pruning interval for KerasPruningCallback. (#1161, thanks @VladSkripniuk!)
    • suggest_float with step in multi_objective. (#1205, thanks @nzw0301!)

    Enhancements

    • Reseed sampler's random seed generator in Study. (#968)
    • Apply lazy import for optuna.dashboard. (#1074)
    • Applied @abc.abstractmethod decorator to the abstract methods of BaseTrial and fixed ChainerMNTrial. (#1087, thanks @gorogoroumaru!)
    • Refactoring of StudyDirection. (#1090)
    • Refactoring of StudySummary. (#1095)
    • Refactoring of TrialState and FrozenTrial. (#1101)
    • Apply lazy import for optuna.structs to raise DeprecationWarning when using. (#1104)
    • Optimize get_all_study_summaries function for RDB storages. (#1109)
    • single() returns True when step or q is greater than high-low. (#1111)
    • Return trial_id at study._append_trial(). (#1114)
    • Use scipy for sampling from truncated normal in TPE sampler. (#1122)
    • Remove unnecessary deep-copies. (#1135)
    • Remove unnecessary shape-conversion and a loop from TPE sampler. (#1145)
    • Support Optuna callback functions at LightGBM Tuner. (#1158)
    • Fix the default value of max_resource to HyperbandPruner. (#1171)
    • Fix the method to calculate n_brackets in HyperbandPruner. (#1188)

    Bug Fixes

    • Support Copy-on-Write for thread safety in in-memory storage. (#1139)
    • Fix the range of sampling in TPE sampler. (#1143)
    • Add figure title to contour plot. (#1181, thanks @harupy!)
    • Raise ValueError that is not raised. (#1208, thanks @harupy!)
    • Fix a bug that occurs when multiple callbacks are passed to MultiObjectiveStudy.optimize. (#1209)

    Installation

    • Set version constraint on the cmaes library. (#1082)
    • Stop installing PyTorch Lightning if Python version is 3.5. (#1193)
    • Install PyTorch without CPU option on macOS. (#1215, thanks @himkt!)

    Documentation

    • Add an Example and Variable Explanations to HyperBandPruner. (#972)
    • Add a reference of cli in the sphinx docs. (#1065)
    • Fix docstring on optuna/integration/*.py. (#1070, thanks @nuka137!)
    • Fix docstring on optuna/distributions.py. (#1089)
    • Remove duplicate description of FrozenTrial.distributions. (#1093)
    • Optuna Read the Docs top page addition. (#1098)
    • Update the outputs of some examples in first.rst. (#1100, thanks @A03ki!)
    • Fix plot_intermediate_values example. (#1103)
      • Thanks @barneyhill for creating the original pull request #1050!
    • Use latest sphinx version on RTD. (#1108)
    • Add class doc to TPESampler. (#1144)
    • Fix a markup in pruner page. (#1172, thanks @nzw0301!)
    • Add examples for doctest to optuna/storages/rdb/storage.py. (#1212, thanks @nuka137!)

    Examples

    • Update PyTorch Lightning example for 0.7.1 version. (#1013, thanks @festeh!)
    • Add visualization example script. (#1085)
    • Update pytorch_simple.py to suggest lr from suggest_loguniform. (#1112)
    • Rename test datasets in examples. (#1164, thanks @VladSkripniuk!)
    • Fix the metric name in KerasPruningCallback example. (#1218)

    Tests

    • Add TPE tests. (#1126)
    • Bundle allennlp test data in the repository. (#1149, thanks @himkt!)
    • Add test for deprecation error of HyperbandPruner. (#1189)
    • Add examples for doctest to optuna/storages/rdb/storage.py. (#1212, thanks @nuka137!)

    Code Fixes

    • Update type hinting of GridSampler.__init__. (#1102)
    • Replace mock with unittest.mock. (#1121)
    • Remove unnecessary is_log logic in TPE sampler. (#1123)
    • Remove redundancy from HyperbandPruner by deprecating min_early_stopping_rate_low. (#1159)
    • Use Trial.system_attrs to store LightGBMTuner's results. (#1177)
    • Remove _TimeKeeper and use timeout of Study.optimize. (#1179)
    • Define key names of system_attrs as variables in LightGBMTuner. (#1192)
    • Minor fixes. (#1203, thanks @nzw0301!)
    • Move colorlog after threading. (#1211, thanks @himkt!)
    • Pass IntUniformDistribution's step to UniformIntegerHyperparameter's q. (#1222, thanks @nzw0301!)

    Continuous Integration

    • Create dockerimage.yml. (#901)
    • Add notebook verification for visualization examples. (#1088)
    • Avoid installing torch with CUDA in CI. (#1118)
    • Avoid installing torch with CUDA in CI by locking version. (#1124)
    • Constraint llvmlite version for Python 3.5. (#1152)
    • Skip GitHub Actions builds on forked repositories. (#1157, thanks @himkt!)
    • Fix --cov option for pytest. (#1187, thanks @harupy!)
    • Unique GitHub Actions step name. (#1190)

    Other

    • GitHub Actions to automatically label PRs. (#1068)
    • Revert "GitHub Actions to automatically label PRs.". (#1094)
    • Remove settings for yapf. (#1110, thanks @himkt!)
    • Update pull request template. (#1113)
    • GitHub Actions to automatically label stale issues and PRs. (#1116)
    • Upgrade actions/stale so that it never closes tickets. (#1131)
    • Run actions/stale on weekday mornings Tokyo time. (#1132)
    • Simplify pull request template. (#1147)
    • Use major version instead of semver for stale. (#1173, thanks @hross!)
    • Do not label contribution-welcome and bug issues as stale. (#1216)
  • v1.3.0 (Apr 2, 2020)

    This is the release note of v1.3.0.

    Highlights

    Experimental CMA-ES

    A new built-in CMA-ES sampler is available. It is still an experimental feature, but we recommend trying it because it is much faster than the existing CMA-ES sampler in the integration submodule. See #920 for details.
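
    A minimal sketch of plugging the new sampler into a study. The entry point is assumed to be optuna.samplers.CmaEsSampler based on #920; check that PR for the exact interface.

    ```python
    import optuna


    def objective(trial):
        x = trial.suggest_uniform("x", -10, 10)
        y = trial.suggest_uniform("y", -10, 10)
        return (x - 2) ** 2 + (y + 3) ** 2


    # Assumed entry point for the new built-in sampler; the previous
    # CMA-ES sampler lives in the integration submodule.
    sampler = optuna.samplers.CmaEsSampler()
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=100)
    ```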

    Experimental Hyperparameter Importance

    Hyperparameter importances can be evaluated using optuna.importance.get_param_importances. This is an experimental feature that currently requires fanova. See #946 for details.
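
    For illustration, a hedged sketch of evaluating importances on a toy study; per the note above, this release assumes the fanova package is installed (pip install fanova).

    ```python
    import optuna


    def objective(trial):
        x = trial.suggest_uniform("x", -10, 10)
        y = trial.suggest_int("y", 0, 10)
        return x ** 2 + y


    study = optuna.create_study()
    study.optimize(objective, n_trials=50)

    # Returns a mapping from parameter name to an importance score
    # (requires the fanova package in this release).
    importances = optuna.importance.get_param_importances(study)
    print(importances)
    ```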

    Breaking Changes

    Changes to the Per-Trial Log Format

    The per-trial log now shows the parameter configuration of the last trial instead of the best trial so far. See #965 for details.
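
    If you previously relied on the log output to track the best configuration, it can still be queried from the study object directly; a small self-contained sketch:

    ```python
    import optuna


    def objective(trial):
        x = trial.suggest_uniform("x", -10, 10)
        return x ** 2


    study = optuna.create_study()
    study.optimize(objective, n_trials=10)

    # The log line after each trial now describes that trial; the best
    # trial remains available on the study object.
    print(study.best_trial.number, study.best_trial.params, study.best_value)
    ```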

    New Features

    • Add step parameter on IntUniformDistribution. (#910, thanks @hayata-yamamoto!)
    • Add CMA-ES sampler. (#920)
    • Add experimental hyperparameter importance feature. (#946)
    • Implement ThresholdPruner. (#963, thanks @himkt!)
    • Add initial implementation of suggest_float. (#1021, thanks @himkt!) See the usage sketch after this list.
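
    A combined usage sketch for two items above, ThresholdPruner (#963) and suggest_float (#1021). The upper keyword and the suggest_float signature are assumptions taken from the PR titles rather than confirmed here.

    ```python
    import optuna


    def objective(trial):
        # suggest_float is the new, experimental float suggestion API (#1021).
        lr = trial.suggest_float("lr", 1e-3, 2.0, log=True)
        w = 0.0
        loss = (w - 2) ** 2
        for step in range(30):
            # Gradient descent on (w - 2)^2; large lr values diverge.
            w -= lr * 2 * (w - 2)
            loss = (w - 2) ** 2
            trial.report(loss, step)
            if trial.should_prune():
                raise optuna.exceptions.TrialPruned()
        return loss


    # ThresholdPruner (#963) stops a trial whose reported value crosses a
    # bound; the `upper` keyword is assumed from the PR.
    study = optuna.create_study(pruner=optuna.pruners.ThresholdPruner(upper=1e3))
    study.optimize(objective, n_trials=30)
    ```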

    Enhancements

    • Log parameters from last trial instead of best trial. (#965)
    • Fix overlap of labels in parallel coordinate plots. (#979, thanks @VladSkripniuk!)

    Bug Fixes

    • Support metric aliases for LightGBM Tuner (fixes #960). (#977, thanks @VladSkripniuk!)
    • Use SELECT FOR UPDATE while updating trial state. (#1014)

    Documentation

    • Add FAQ entry on how to save/resume studies using in-memory storage. (#869, thanks @victorhcm!)
    • Fix pruning n_warmup_steps documentation. (#980, thanks @PhilipMay!)
    • Apply gray background for code-block:: console. (#983)
    • Add syntax highlighting and fix spelling variants. (#990, thanks @daikikatsuragawa!)
    • Add examples for doctest to optuna/samplers/*.py and optuna/integration/*.py. (#999, thanks @nuka137!)
    • Embed plotly figures in documentation. (#1003, thanks @harupy!)
    • Improve callback docs for optimize function. (#1016, thanks @PhilipMay!)
    • Fix docstring on optuna/integration/tensorflow.py. (#1019, thanks @nuka137!)
    • Fix docstring in RDBStorage. (#1022)
    • Fix direction in doctest. (#1036, thanks @himkt!)
    • Add a link to the AllenNLP example in README.md. (#1040)
    • Apply document code formatting with Black. (#1044, thanks @PhilipMay!)
    • Remove obsolete description from contribution guidelines. (#1047)
    • Improve contribution guidelines. (#1052)
    • Document intersection_search_space parameters. (#1053)
    • Add descriptions to cli commands. (#1064)

    Examples

    • Add allennlp example. (#949, thanks @himkt!)

    Code Fixes

    • Add number field in trials table. (#939)
    • Implement some methods that are nearly compatible with scikit-learn's private methods. (#952, thanks @himkt!)
    • Use function annotation syntax for type hints. (#989, #993, #996, thanks @bigbird555!)
    • Add RDB storage number column comment. (#1006)
    • Sort dependencies in setup.py (fix #1005). (#1007, thanks @VladSkripniuk!)
    • Fix mypy==0.770 errors. (#1009)
    • Fix a validation error message. (#1010)
    • Remove python version check. (#1023)
    • Fix a typo on optuna/integration/pytorch_lightning.py. (#1024, thanks @nai62!)
    • Add a todo comment in GridSampler. (#1027)
    • Change formatter from autopep8 to black (string normalization separate commit). (#1030)
    • Update module import of sklearn.utils.safe_indexing for scikit-learn==0.24. (#1031, thanks @kuroko1t!)
    • Fix black error. (#1034)
    • Remove duplicate import of FATAL. (#1035)
    • Fix import order and plot label truncation. (#1046)

    Continuous Integration

    • Add version restriction to pytorch_lightning and bokeh. (#998)
    • Relax PyTorch Lightning version constraint to fix daily CI build. (#1002)
    • Store documentation as an artifact on CircleCI. (#1008, thanks @harupy!)
    • Introduce GitHub Action to execute CI for examples. (#1011)
    • Ignore allennlp in Python 3.5 and Python 3.8. (#1042, thanks @himkt!)
    • Remove daily CircleCI builds. (#1048)

    Other

    • Refactor Mypy configuration into setup.cfg. (#985, thanks @pablete!)
    • Ignore .pytest_cache. (#991, thanks @harupy!)