Open source platform for the machine learning lifecycle

Overview

MLflow: A Machine Learning Lifecycle Platform

MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud). MLflow's current components are:

  • MLflow Tracking: An API to log parameters, code, and results in machine learning experiments and compare them using an interactive UI.
  • MLflow Projects: A code packaging format for reproducible runs using Conda and Docker, so you can share your ML code with others.
  • MLflow Models: A model packaging format and tools that let you easily deploy the same model (from any ML library) to batch and real-time scoring on platforms such as Docker, Apache Spark, Azure ML and AWS SageMaker.
  • MLflow Model Registry: A centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of MLflow Models.


Installing

Install MLflow from PyPI via pip install mlflow

MLflow requires conda to be on the PATH for the projects feature.

Nightly snapshots of MLflow master are also available here.

Documentation

Official documentation for MLflow can be found at https://mlflow.org/docs/latest/index.html.

Community

For help or questions about MLflow usage (e.g. "how do I do X?") see the docs or Stack Overflow.

To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.

For release announcements and other discussions, please subscribe to our mailing list (mlflow-users@googlegroups.com) or join us on Slack.

Running a Sample App With the Tracking API

The programs in examples use the MLflow Tracking API. For instance, run:

python examples/quickstart/mlflow_tracking.py

This program uses the MLflow Tracking API to log tracking data in ./mlruns, which can then be viewed with the Tracking UI.
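
Below is a minimal sketch of what such a tracking script does (a hedged example, not a verbatim copy of examples/quickstart/mlflow_tracking.py): it logs a parameter, a metric, and an artifact to a run stored under ./mlruns.

import os

import mlflow

with mlflow.start_run():
    mlflow.log_param("param1", 5)             # a hyperparameter
    mlflow.log_metric("foo", 1.0)             # a result metric
    os.makedirs("outputs", exist_ok=True)
    with open("outputs/test.txt", "w") as f:  # an output file to keep as an artifact
        f.write("hello world!")
    mlflow.log_artifacts("outputs")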

Launching the Tracking UI

The MLflow Tracking UI will show runs logged in ./mlruns at http://localhost:5000. Start it with:

mlflow ui

Note: Running mlflow ui from within a clone of MLflow is not recommended - doing so will run the dev UI from source. We recommend running the UI from a different working directory, specifying a backend store via the --backend-store-uri option. Alternatively, see instructions for running the dev UI in the contributor guide.

Running a Project from a URI

The mlflow run command lets you run a project packaged with an MLproject file from a local path or a Git URI:

mlflow run examples/sklearn_elasticnet_wine -P alpha=0.4

mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.4

See examples/sklearn_elasticnet_wine for a sample project with an MLproject file.
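
The same project can also be launched programmatically. Below is a minimal sketch using the mlflow.projects API (a hedged example, assuming the project's environment, e.g. Conda, can be built on the local machine):

import mlflow.projects

# Programmatic equivalent of the CLI commands above.
submitted = mlflow.projects.run(
    "https://github.com/mlflow/mlflow-example.git",
    parameters={"alpha": 0.4},
)
print(submitted.run_id)  # ID of the tracked run created for this execution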

Saving and Serving Models

To illustrate managing models, the mlflow.sklearn package can log scikit-learn models as MLflow artifacts and then load them again for serving. There is an example training application in examples/sklearn_logistic_regression/train.py that you can run as follows:

$ python examples/sklearn_logistic_regression/train.py
Score: 0.666
Model saved in run <run-id>

$ mlflow models serve --model-uri runs:/<run-id>/model

$ curl -d '{"columns":[0],"index":[0,1],"data":[[1],[-1]]}' -H 'Content-Type: application/json'  localhost:5000/invocations
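
As an alternative to scoring over HTTP, the same logged model can be loaded back into Python through the pyfunc flavor. A minimal sketch (replace <run-id> with the run ID printed by the training script above):

import pandas as pd

import mlflow.pyfunc

model = mlflow.pyfunc.load_model("runs:/<run-id>/model")
print(model.predict(pd.DataFrame([[1], [-1]])))  # same inputs as the curl example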

Contributing

We happily welcome contributions to MLflow. Please see our contribution guide for details.

Comments
  • Tracking Server not working as a proxy for localhost

    Willingness to contribute

    No. I cannot contribute a bug fix at this time.

    MLflow version

    1.25.1

    System information

    • localhost: GitBash
    • Remote Host: Kubernetes POD
    • Artifact Destination: AWS S3
    • Python 3.7.2

    Describe the problem

    I am having a similar issue to what was posted here: https://github.com/mlflow/mlflow/issues/5659. Unfortunately, the solution provided there hasn't worked for me. When running a modeling script on the Remote Host, the artifacts get stored in S3 properly. When I run from localhost, I get:

    botocore.exceptions.NoCredentialsError: Unable to locate credentials

    I have determined that it expects to use local credentials when running on localhost instead of the ones on the Tracking Server. I'm hoping somebody has more suggestions of things to try or look for.
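
    For context, a minimal sketch (an editorial illustration, not taken from the issue) of why credentials are resolved locally in this kind of setup: with an s3:// artifact location, the upload is performed by the client itself via boto3, so the machine running the script needs its own AWS credentials regardless of what the tracking server has.

    # Assumption: the client uploads directly to s3:// artifact URIs via boto3,
    # which looks for credentials locally (env vars, ~/.aws/credentials, IAM role).
    import boto3

    boto3.client("s3").list_buckets()  # raises NoCredentialsError when no local credentials exist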

    Tracking information

    The tracking and artifact URIs look as expected. Not sharing them for security reasons.

    Code to reproduce issue

    import matplotlib.pyplot as plt  # imports added for completeness; implied by the snippet
    import mlflow

    mlflow.set_tracking_uri("masked")
    mlflow.set_experiment("masked")
    with mlflow.start_run():
    .
    .
    .
        plt.savefig('plot.png')
        print(mlflow.get_tracking_uri())
        print(mlflow.get_artifact_uri())
        mlflow.log_artifact("plot.png")
    

    Other info / logs

    botocore.exceptions.NoCredentialsError: Unable to locate credentials

    What component(s) does this bug affect?

    • [X] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    What interface(s) does this bug affect?

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    What language(s) does this bug affect?

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    What integration(s) does this bug affect?

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations
    bug area/artifacts 
    opened by njanopoulos 68
  • [BUG] IsADirectoryError while selecting S3 artifact in the UI

    System information

    • Have I written custom code (as opposed to using a stock example script provided in MLflow): yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): centos 7.4
    • MLflow installed from (source or binary): binary
    • MLflow version (run mlflow --version): 1.9.0
    • Python version: 3.7.3
    • npm version, if running the dev UI: NA
    • Exact command to reproduce:
    • S3 packages: botocore 1.14.14, boto3 1.11.14

    Describe the problem

    My mlflow server runs on CentOS with a PostgreSQL backend store and S3 (minio) artifact storage: mlflow server --backend-store-uri postgresql://<pg-location-and-credentials> --default-artifact-root s3://mlflow -h 0.0.0.0 -p 8000. I set all the relevant S3 environment variables: MLFLOW_S3_ENDPOINT_URL, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION. I've successfully run several runs from another machine against this server:

    • Runs all finished OK, with params, metrics and artifacts.

    • The PostgreSQL mlflow tables were updated accordingly.

    • All artifacts were stored in the minio bucket as expected, and I can display and download them via the minio browser.

    However, when I select any artifact in the UI, I get Internal Server Error in the browser.

    Other info / logs

    In the mlflow server I see the following error:

    ERROR mlflow.server: Exception on /get-artifact [GET]
    # I skip most of the traceback
    File "<python-path>/site-packages/mlflow/serverhandlers.py": line 180, in get_artifact_handler
        return send_file(filename, mimetype='text/plain', as_attachment=True)
    File "<python-path>/site-packages/flask/helpers.py", line 629, in send_file
        file = open(filename, "rb")
    IsADirectoryError: [Errno 21] Is a directory: '/tmp/<generated-name>/<my-file>'
    

    Indeed, '/tmp/<generated-name>/' exists and it really is a directory, not a file! This folder contains another directory with a generated name, and inside there's nothing! I didn't find any similar error reported for mlflow with S3. What's wrong?

    What component(s), interfaces, languages, and integrations does this bug affect?

    Components

    • [x] area/artifacts: Artifact stores and artifact logging
    bug area/artifacts priority/awaiting-more-evidence 
    opened by amiryi365 51
  • [FR] Proxy uploading of artifacts through the tracking service

    MLflow Roadmap Item

    This is an MLflow Roadmap item that has been prioritized by the MLflow maintainers.

    Proposal Summary

    As per request from @smurching I'm leaving it here as a topic for future discussion :)

    Currently the mlflow workflow has clients sending artifacts directly to a location specified by the tracking service. For example, the service will return an Amazon S3 location in which the client may place the artifacts. This, however, means the clients have to know about S3 and must have access to the bucket - requiring specific IAM roles to be configured for client access, which may or may not be feasible.

    Furthermore, when looking at it from a deployment pipeline (CI/CD) perspective, I would expect my client to be considerably dumb and simply end the model training, fitting etc. with something like mlflow.deploy(supermodel), which would kick off a proper CI/CD process and essentially ship my model.

    For regular (?) models this should be pretty straightforward, as they range in the order of MBs. It'll probably get interesting with artifacts of several GBs, though scaling a (new/additional?) component of the tracking service could be a potential fix for that.
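
    For illustration, a minimal sketch of the current (non-proxied) flow described above, assuming a remote tracking server and an S3 artifact root (the server address and file name are hypothetical):

    import mlflow

    mlflow.set_tracking_uri("http://tracking-server:5000")  # hypothetical server
    with mlflow.start_run():
        # The server only hands back the artifact location (e.g. s3://bucket/...);
        # the upload itself is done by the client, which therefore needs S3 access.
        print(mlflow.get_artifact_uri())
        mlflow.log_artifact("model.pkl")  # hypothetical local file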

    Anyway, I'm very interested in the general and MLflow-specific view on this topic and where it should or could go :)

    enhancement area/artifacts needs design 
    opened by harmw 44
  • [FR] Autologging functionality for scikit-learn

    Describe the proposal

    Provide a clear high-level description of the feature request in the following sections.

    It'd be nice to add an mlflow.sklearn.autolog() API for automatically logging metrics, params & models generated via scikit-learn.

    Note that I'm personally not particularly familiar with the scikit-learn APIs, so I'd welcome feedback on the proposal below.

    MVP API Proposal

    We could patch the BaseEstimator.fit method to log the params of the model being fit (estimator params are accessible via get_params) and also log the fitted model itself.

    We should take care to ensure the UX is reasonable when working with scikit-learn Pipelines, which allow for defining DAGs of estimators. There are a few options here:

    1. The sub-estimators simply log nothing (simple to achieve, but may result in too little logged information).
    2. Log params of the sub-estimators comprising a Pipeline under the parent run, but do not log the fitted sub-estimators themselves (log only the enclosing Pipeline model). Note that sub-estimator params can be obtained by passing deep=True to Estimator.get_params.
    3. The sub-estimators comprising a Pipeline & their params are logged under child runs of the parent run
    4. The sub-estimators comprising a Pipeline & their params are logged under the parent run (IMO this option is a non-starter, as the parent run could become very polluted with models/params)

    For example:

    # Logs information about `pipeline` under the run named "parent". In the custom
    # patched logic that runs when we call `pipeline.fit()`, we can set an 
    # `mlflow.sklearn.runContainsPipeline` tag on "parent" indicating that it's a run
    # containing a Pipeline. As a result, our custom logic for 
    # fitting our linear regressor knows to create a child run to which to log its 
    # params (or simply not log anything), by virtue of checking that the
    # `mlflow.sklearn.runContainsPipeline` tag is set on the current active run.
    with mlflow.start_run(run_name="parent"):
       pipeline = Pipeline([standard_scaler, linear_regression])
       pipeline.fit(train)
    
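    As a complementary illustration (a rough sketch, not the proposed implementation), wrapping a single estimator's fit() shows the basic mechanics: log the params obtained from get_params() plus the fitted model under the active run.

    import mlflow
    import mlflow.sklearn
    from sklearn.linear_model import LinearRegression

    _original_fit = LinearRegression.fit  # patch one concrete estimator here purely for illustration

    def _patched_fit(self, *args, **kwargs):
        fitted = _original_fit(self, *args, **kwargs)
        mlflow.log_params(self.get_params(deep=False))  # estimator params
        mlflow.sklearn.log_model(fitted, "model")       # fitted model
        return fitted

    LinearRegression.fit = _patched_fit

    with mlflow.start_run():
        X, y = [[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0]
        LinearRegression().fit(X, y)  # params and model are logged automatically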

    Motivation

    scikit-learn is a popular ML library, and it'd be a big value-add to make it easy for users to add MLflow tracking to their existing scikit-learn code.

    Proposed Changes

    For user-facing changes, what APIs are you proposing to add or modify? What code paths will need to be modified? See above - we propose adding a new mlflow.sklearn.autolog API

    We can add the definition of the new autolog API in https://github.com/mlflow/mlflow/blob/master/mlflow/sklearn.py, and unit tests under mlflow/tests/sklearn/test_sklearn_autologging.py. See this PR: https://github.com/mlflow/mlflow/pull/1601 as an example of how the same was done for Keras.

    enhancement good first issue area/tracking priority/important-soon 
    opened by smurching 38
  • BUG: fixed model serve fail with HTTP 400 on Bad Request.

    Signed-off-by: Andrei Batomunkuev [email protected]

    What changes are proposed in this pull request?

    Solves #4897.

    Please refer to my root cause analysis of the issue. I have explained that the root cause might be an incorrect value, MALFORMED_REQUEST, being passed to the error_code argument of the _handle_serving_error function.

    To fix this, I have changed the value passed to error_code from MALFORMED_REQUEST to BAD_REQUEST.

    The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request. Source

    The HyperText Transfer Protocol (HTTP) 400 Bad Request response status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). Source

    To reproduce the error, let's run the following command from the issue:

    curl -i -X POST -d "{\"data\":0.0199132142]}" -H "Content-Type: application/json" http://localhost:5000/invocations
    

    In our case, the error should be 400 Bad Request, since we are passing a JSON object with invalid syntax. So the error message should reflect the corresponding issue.
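
    The same reproduction in Python (a hedged sketch, assuming a model is being served locally on port 5000):

    import requests

    resp = requests.post(
        "http://localhost:5000/invocations",
        data='{"data":0.0199132142]}',  # deliberately malformed JSON, as in the curl command above
        headers={"Content-Type": "application/json"},
    )
    print(resp.status_code, resp.text)  # expected: 400 BAD_REQUEST with this fix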

    How is this patch tested?

    Currently, the unit tests in test_scoring_server.py fail since we have updated the value to BAD_REQUEST. So we need to either fix those existing tests or write a new unit test. I will need some help fixing the unit tests; I am a little confused about which unit tests belong to this issue.

    Release Notes

    Is this a user-facing change?

    • [ ] No. You can skip the rest of this section.
    • [x] Yes. Give a description of this change to be included in the release notes for MLflow users.

    When the user passes a JSON object with invalid syntax to the server, the server should display the following message:

    HTTP/1.1 400 BAD REQUEST
    Server: gunicorn
    Date: Thu, 04 Nov 2021 06:29:13 GMT
    Connection: close
    Content-Type: application/json
    Content-Length: 1030
    
    {"error_code": "BAD_REQUEST", "message": "Failed to parse input from JSON. Ensure that input is a valid JSON formatted string.",....
    

    Example of the invalid JSON payload:

    curl -i -X POST -d "{\"data\":0.0199132142]}" -H "Content-Type: application/json" http://localhost:5000/invocations
    

    What component(s), interfaces, languages, and integrations does this PR affect?

    Components

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/projects: MLproject format, project running backends
    • [x] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    Interface

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    Language

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    Integrations

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [ ] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [x] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/bug-fix area/scoring 
    opened by abatomunkuev 34
  • [SETUP-BUG] Mlflow is not writing to the s3 bucket

    Thank you for submitting an issue. Please refer to our issue policy for information on what types of issues we address.

    Please fill in this installation issue template to ensure a timely and thorough response.

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
    • MLflow installed from (source or binary): Source
    • MLflow version (run mlflow --version): mlflow, version 1.10.0
    • Python version: 3.7
    • Exact command to reproduce:

    I am launching the MLflow server on an EC2 instance with a Postgres database and an S3 bucket, listening on host 0.0.0.0.

    Describe the problem

    Provide the exact sequence of commands / steps that you executed before running into the problem.

    When I create an experiment, either on the command line or through the UI, it does not show up in S3 at all. I am able to write to the S3 bucket through the AWS CLI.

    Other info / logs

    Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

    bug 
    opened by rd16395p 34
  • [BUG] Permissions issue when writing on /workspace

    Willingness to contribute

    No. I cannot contribute a bug fix at this time.

    MLflow version

    1.27.0

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22.04.1 LTS
    • Python version: 3.9 (Mambaforge)
    • yarn version, if running the dev UI: n/a

    Describe the problem

    After terminating an unfinished run (while writing and debugging code), I see the following error in Python 3.9. The only solution I have found so far is to remove the mlruns/ directory, as there appears to be some permission issue with unfinished runs. However, I cannot always do that, e.g., when another training is in progress while I want to debug code that is not always running/completing successfully.

    Is this a known issue with some known fix?

    Tracking information

    MLflow version: 1.27.0
    Tracking URI: file:///home/local/dev/mlruns
    Artifact URI: file:///workspace/mlruns/0/5b986cb300594fe2b2730742acf6953a/artifacts

    Code to reproduce issue

    Cannot disclose code.

    Stack trace

      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/tracking/fluent.py", line 846, in log_dict
        MlflowClient().log_dict(run_id, dictionary, artifact_file)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/tracking/client.py", line 1094, in log_dict
        json.dump(dictionary, f, indent=2)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/contextlib.py", line 126, in __exit__
        next(self.gen)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/tracking/client.py", line 1020, in _log_artifact_helper
        self.log_artifact(run_id, tmp_path, artifact_dir)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/tracking/client.py", line 955, in log_artifact
        self._tracking_client.log_artifact(run_id, local_path, artifact_path)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py", line 365, in log_artifact
        artifact_repo.log_artifact(local_path, artifact_path)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/store/artifact/local_artifact_repo.py", line 37, in log_artifact
        mkdir(artifact_dir)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/utils/file_utils.py", line 119, in mkdir
        raise e
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/site-packages/mlflow/utils/file_utils.py", line 116, in mkdir
        os.makedirs(target)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/os.py", line 215, in makedirs
        makedirs(head, exist_ok=exist_ok)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/os.py", line 215, in makedirs
        makedirs(head, exist_ok=exist_ok)
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/os.py", line 215, in makedirs
        makedirs(head, exist_ok=exist_ok)
      [Previous line repeated 1 more time]
      File "/home/local/mambaforge3/envs/tf_dev/lib/python3.9/os.py", line 225, in makedirs
        mkdir(name, mode)
    PermissionError: [Errno 13] Permission denied: '/workspace'
    python-BaseException
    
    Process finished with exit code 130 (interrupted by signal 2: SIGINT)
    

    Other info / logs

    I run MLflow locally on a Linux desktop machine for debugging experimental code in TensorFlow. The issue appears when previous runs are terminated abruptly.

    What component(s) does this bug affect?

    • [X] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [X] area/tracking: Tracking Service, tracking client APIs, autologging

    What interface(s) does this bug affect?

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    What language(s) does this bug affect?

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    What integration(s) does this bug affect?

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations
    bug area/artifacts area/tracking 
    opened by VC86 33
  • Manage Experiments from MLflow UI (Create/Rename/Delete Experiments)

    What changes are proposed in this pull request?

    Manage Experiments from the UI:

    • Added an option to create experiments to the UI
    • Added an option to rename experiments to the UI
    • Added an option to delete experiments to the UI

    Instead of using the CLI, users can now manage experiments via the UI more easily. These features (among others) have also been requested in https://github.com/mlflow/mlflow/issues/1028.

    The following gif demonstrates the change:

    (gif: mlflow_manage_experiments_v2)

    How is this patch tested?

    • Manual testing
    • Added basic unit tests to ExperimentListView.test.js and related forms

    Release Notes

    Is this a user-facing change?

    • [ ] No. You can skip the rest of this section.
    • [X] Yes. Give a description of this change to be included in the release notes for MLflow users.

    Enable users to create, delete and rename experiments from the UI.

    What component(s) does this PR affect?

    • [X] UI
    • [ ] CLI
    • [ ] API
    • [ ] REST-API
    • [ ] Examples
    • [ ] Docs
    • [ ] Tracking
    • [ ] Projects
    • [ ] Artifacts
    • [ ] Models
    • [ ] Scoring
    • [ ] Serving
    • [ ] R
    • [ ] Java
    • [ ] Python

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [ ] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [X] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [ ] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/feature 
    opened by ggliem 32
  • Add run links popover to metrics plot

    What changes are proposed in this pull request?

    Resolves #2279

    How is this patch tested?

    (Details)

    Release Notes

    Is this a user-facing change?

    • [ ] No. You can skip the rest of this section.
    • [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.

    (Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

    What component(s) does this PR affect?

    • [ ] UI
    • [ ] CLI
    • [ ] API
    • [ ] REST-API
    • [ ] Examples
    • [ ] Docs
    • [ ] Tracking
    • [ ] Projects
    • [ ] Artifacts
    • [ ] Models
    • [ ] Scoring
    • [ ] Serving
    • [ ] R
    • [ ] Java
    • [ ] Python

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [ ] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [ ] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/feature 
    opened by harupy 32
  • Report SQLAlchemy OperationalError as a retryable HTTP error (503) not 400

    Related Issues/PRs

    Fixes #7238

    What changes are proposed in this pull request?

    Created an additional except block in the make_managed_session() function in mlflow/store/db/utils.py. It catches SQLAlchemy OperationalError (e.g. database down) and reports HTTP error 503 (TEMPORARILY_UNAVAILABLE). This triggers the standard MLflow client "exponential backoff" retry behavior, which avoids causing clients to fail unnecessarily (assuming the database issue is fixed soon).
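
    A simplified sketch of the idea (an illustration, not the exact MLflow code), assuming the standard MlflowException/error-code machinery:

    from contextlib import contextmanager

    from sqlalchemy.exc import OperationalError

    from mlflow.exceptions import MlflowException
    from mlflow.protos.databricks_pb2 import TEMPORARILY_UNAVAILABLE

    @contextmanager
    def make_managed_session(session_factory):
        session = session_factory()
        try:
            yield session
            session.commit()
        except OperationalError as e:
            session.rollback()
            # Database temporarily unreachable: report a retryable 503 so the
            # client's exponential-backoff retry logic kicks in.
            raise MlflowException(str(e), error_code=TEMPORARILY_UNAVAILABLE)
        except Exception as e:
            session.rollback()
            raise MlflowException(str(e))
        finally:
            session.close()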

    How is this patch tested?

    • [ ] Existing unit/integration tests
    • [x] New unit/integration tests
    • [X] Manual tests (describe details, including test results, below)

    I started this test script and left it running:

    import os
    import time
    import uuid
    
    import mlflow
    
    mlflow.set_experiment("foo")
    num_iterations = 180
    for i in range(num_iterations):
        print(f"Creating run {i} of {num_iterations}")
        with mlflow.start_run():
            time.sleep(1)
    

    I started the MLflow server as follows:

    PYTHONUNBUFFERED=1 mlflow server --backend-store-uri postgresql://mlflow:[email protected]/postgres \
        --default-artifact-root /tmp/ --host=0.0.0.0 --port=5007 --workers 1 \
         --gunicorn-opts '--timeout 0 --workers 1 --threads 1 --error-logfile - --log-file - --log-level debug'
    

    As the test script was running, I manually ran service postgresql stop and service postgresql start several times to confirm that:

    • The script did not fail.
    • The script "paused" while the database was down, then resumed once the database was up again.

    Previously, the test script failed immediately and exited with the stack backtrace provided in the linked issue (#7238).

    Does this PR change the documentation?

    • [X] No. You can skip the rest of this section.
    • [ ] Yes. Make sure the changed pages / sections render correctly in the documentation preview.

    Release Notes

    Is this a user-facing change?

    • [ ] No. You can skip the rest of this section.
    • [X] Yes. Give a description of this change to be included in the release notes for MLflow users.

    If the SQL database is down, MLflow REST API reports a retryable HTTP error 503 (TEMPORARILY_UNAVAILABLE), which will be automatically retried by the MLflow client library. Previously, this was reported as a non-retryable HTTP error (400).

    What component(s), interfaces, languages, and integrations does this PR affect?

    Components

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [X] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    Interface

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [X] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    Language

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    Integrations

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [ ] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [X] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/bug-fix area/sqlalchemy area/server-infra 
    opened by barrywhart 31
  • MLFlow UI takes very long time to load experiments

    System information

    • mlflow 1.0.0
    • python 3.6.8
    • mlflow server running in kubernetes EKS behind ELB
    • experiments stored on EBS volume
    • Current gunicorn configuration:
    [2019-06-27 17:32:42 +0000] [13] [DEBUG] Current configuration:
      config: None
      bind: ['0.0.0.0:5000']
      backlog: 2048
      workers: 4
      worker_class: gevent
      threads: 3
      worker_connections: 1000
      max_requests: 0
      max_requests_jitter: 0
      timeout: 300
      graceful_timeout: 30
      keepalive: 300
      limit_request_line: 4094
      limit_request_fields: 100
      limit_request_field_size: 8190
      reload: False
      reload_engine: auto
      reload_extra_files: []
      spew: False
      check_config: False
      preload_app: False
      sendfile: None
      reuse_port: False
      chdir: /
      daemon: False
      raw_env: []
      pidfile: None
      worker_tmp_dir: None
      user: 0
      group: 0
      umask: 0
      initgroups: False
      tmp_upload_dir: None
      secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
      forwarded_allow_ips: ['127.0.0.1']
      accesslog: None
      disable_redirect_access_to_syslog: False
      access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
      errorlog: -
      loglevel: DEBUG
      capture_output: False
      logger_class: gunicorn.glogging.Logger
      logconfig: None
      logconfig_dict: {}
      syslog_addr: udp://localhost:514
      syslog: False
      syslog_prefix: None
      syslog_facility: user
      enable_stdio_inheritance: False
      statsd_host: None
      statsd_prefix: 
      proc_name: None
      default_proc_name: mlflow.server:app
      pythonpath: None
      paste: None
      on_starting: <function OnStarting.on_starting at 0x7fc2cfbff158>
      on_reload: <function OnReload.on_reload at 0x7fc2cfbff268>
      when_ready: <function WhenReady.when_ready at 0x7fc2cfbff378>
      pre_fork: <function Prefork.pre_fork at 0x7fc2cfbff488>
      post_fork: <function Postfork.post_fork at 0x7fc2cfbff598>
      post_worker_init: <function PostWorkerInit.post_worker_init at 0x7fc2cfbff6a8>
      worker_int: <function WorkerInt.worker_int at 0x7fc2cfbff7b8>
      worker_abort: <function WorkerAbort.worker_abort at 0x7fc2cfbff8c8>
      pre_exec: <function PreExec.pre_exec at 0x7fc2cfbff9d8>
      pre_request: <function PreRequest.pre_request at 0x7fc2cfbffae8>
      post_request: <function PostRequest.post_request at 0x7fc2cfbffb70>
      child_exit: <function ChildExit.child_exit at 0x7fc2cfbffc80>
      worker_exit: <function WorkerExit.worker_exit at 0x7fc2cfbffd90>
      nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7fc2cfbffea0>
      on_exit: <function OnExit.on_exit at 0x7fc2cfc0d048>
      proxy_protocol: False
      proxy_allow_ips: ['127.0.0.1']
      keyfile: None
      certfile: None
      ssl_version: 2
      cert_reqs: 0
      ca_certs: None
      suppress_ragged_eofs: True
      do_handshake_on_connect: False
      ciphers: TLSv1
      raw_paste_global_conf: []
    

    Describe the problem

    We have some experiments with a lot of runs (over 400) and the UI takes forever to load them all. For instance, loading 400 runs takes around 2m15s. I had to bump the AWS ELB idle timeout to 300s just to get that page to load. I saw that the UI does a single AJAX call to load all runs at the same time.

    Would it be possible to have lazy loading of the runs instead of loading them all at the same time?

    area/uiux stale 
    opened by mbelang 31
  • Fix community flavor docs

    Resolves #7625

    What changes are proposed in this pull request?

    (Please fill in changes proposed in this fix)

    How is this patch tested?

    • [ ] Existing unit/integration tests
    • [ ] New unit/integration tests
    • [ ] Manual tests (describe details, including test results, below)

    Does this PR change the documentation?

    • [ ] No. You can skip the rest of this section.
    • [x] Yes. Make sure the changed pages / sections render correctly in the documentation preview.

    Release Notes

    Is this a user-facing change?

    • [x] No. You can skip the rest of this section.
    • [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.

    (Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

    What component(s), interfaces, languages, and integrations does this PR affect?

    Components

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    Interface

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    Language

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    Integrations

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [x] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [ ] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/none 
    opened by benjaminbluhm 1
  • [FR]

    Willingness to contribute

    No. I cannot contribute this feature at this time.

    Proposal Summary

    Can we bring back the list API for registered models?

    Motivation

    Make it easier to extract all the models in the registry

    Details

    No response

    What component(s) does this bug affect?

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [ ] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [X] area/tracking: Tracking Service, tracking client APIs, autologging

    What interface(s) does this bug affect?

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    What language(s) does this bug affect?

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    What integration(s) does this bug affect?

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations
    enhancement area/tracking 
    opened by isv66 0
  • Replace tabs in Makefile template with spaces

    Signed-off-by: harupy [email protected]

    Related Issues/PRs

    #7640

    What changes are proposed in this pull request?

    This replaces tabs in the Makefile template with spaces to fix #7640.

    How is this patch tested?

    • [x] Existing unit/integration tests
    • [ ] New unit/integration tests
    • [ ] Manual tests (describe details, including test results, below)

    Does this PR change the documentation?

    • [ ] No. You can skip the rest of this section.
    • [ ] Yes. Make sure the changed pages / sections render correctly in the documentation preview.

    Release Notes

    Is this a user-facing change?

    • [ ] No. You can skip the rest of this section.
    • [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.

    (Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

    What component(s), interfaces, languages, and integrations does this PR affect?

    Components

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [x] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    Interface

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    Language

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    Integrations

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [x] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [ ] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/none area/recipes 
    opened by harupy 1
  • [Recipes][Bug Fix] Correctly computed worst example df with predict probabilities

    What changes are proposed in this pull request?

    [Recipes][Bug Fix] Correctly computed worst example df with predict probabilities

    How is this patch tested?

    • [x] Verified by running a notebook

    Does this PR change the documentation?

    • [x] No. You can skip the rest of this section.
    • [ ] Yes. Make sure the changed pages / sections render correctly in the documentation preview.

    Release Notes

    Is this a user-facing change?

    • [x] No. You can skip the rest of this section.
    • [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.

    (Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)

    What component(s), interfaces, languages, and integrations does this PR affect?

    Components

    • [ ] area/artifacts: Artifact stores and artifact logging
    • [ ] area/build: Build and test infrastructure for MLflow
    • [ ] area/docs: MLflow documentation pages
    • [ ] area/examples: Example code
    • [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
    • [ ] area/models: MLmodel format, model serialization/deserialization, flavors
    • [x] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
    • [ ] area/projects: MLproject format, project running backends
    • [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
    • [ ] area/server-infra: MLflow Tracking server backend
    • [ ] area/tracking: Tracking Service, tracking client APIs, autologging

    Interface

    • [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
    • [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
    • [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
    • [ ] area/windows: Windows support

    Language

    • [ ] language/r: R APIs and clients
    • [ ] language/java: Java APIs and clients
    • [ ] language/new: Proposals for new client languages

    Integrations

    • [ ] integrations/azure: Azure and Azure ML integrations
    • [ ] integrations/sagemaker: SageMaker integrations
    • [ ] integrations/databricks: Databricks integrations

    How should the PR be classified in the release notes? Choose one:

    • [ ] rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
    • [ ] rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
    • [ ] rn/feature - A new user-facing feature worth mentioning in the release notes
    • [x] rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
    • [ ] rn/documentation - A user-facing documentation change worth mentioning in the release notes
    rn/bug-fix area/recipes 
    opened by sunishsheth2009 1
  • Bump json5 from 1.0.1 to 1.0.2 in /mlflow/server/js

    Bumps json5 from 1.0.1 to 1.0.2.

    Release notes

    Sourced from json5's releases.

    v1.0.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295). This has been backported to v1. (#298)
    Changelog

    Sourced from json5's changelog.

    Unreleased [code, diff]

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1 [code, diff]

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0 [code, diff]

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)

    v2.1.2 [code, diff]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies javascript 
    opened by dependabot[bot] 1
  • Bump gitpython from 3.1.29 to 3.1.30 in /.devcontainer

    Bumps gitpython from 3.1.29 to 3.1.30.

    Release notes

    Sourced from gitpython's releases.

    v3.1.30 - with important security fixes

    See gitpython-developers/GitPython#1515 for details.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 1
Releases (v2.1.1)
  • v2.1.1 (Dec 26, 2022)

    MLflow 2.1.1 is a patch release containing the following bug fixes:

    • [Scoring] Fix mlflow.pyfunc.spark_udf() type casting error on model with ColSpec input schema and make PyFuncModel.predict support dataframe with elements of numpy.ndarray type (#7592 @WeichenXu123)
    • [Scoring] Make mlflow.pyfunc.scoring_server.client.ScoringServerClient support input dataframe with elements of numpy.ndarray type (#7594 @WeichenXu123)
    • [Tracking] Ensure mlflow imports ML packages lazily (#7597, @harupy)
  • v2.1.0 (Dec 21, 2022)

    MLflow 2.1.0 includes several major features and improvements

    Features:

    • [Recipes] Introduce support for multi-class classification (#7458, @mshtelma)
    • [Recipes] Extend the pyfunc representation of classification models to output scores in addition to labels (#7474, @sunishsheth2009)
    • [UI] Add user ID and lifecycle stage quick search links to the Runs page (#7462, @jaeday)
    • [Tracking] Paginate the GetMetricHistory API (#7523, #7415, @BenWilson2)
    • [Tracking] Add Runs search aliases for Run name and start time that correspond to UI column names (#7492, @apurva-koti)
    • [Tracking] Add a /version endpoint to mlflow server for querying the server's MLflow version (#7273, @joncarter1)
    • [Model Registry] Add FileStore support for the Model Registry (#6605, @serena-ruan)
    • [Model Registry] Introduce an mlflow.search_registered_models() fluent API (#7428, @TSienki)
    • [Model Registry / Java] Add a getRegisteredModel() method to the Java client (#6602) (#7511, @drod331)
    • [Model Registry / R] Add an mlflow_set_model_version_tag() method to the R client (#7401, @leeweijie)
    • [Models] Introduce a metadata field to the MLmodel specification and log_model() methods (#7237, @jdonzallaz)
    • [Models] Extend Model.load() to support loading MLmodel specifications from remote locations (#7517, @dbczumar)
    • [Models] Pin the major version of MLflow in Models' requirements.txt and conda.yaml files (#7364, @BenWilson2)
    • [Scoring] Extend mlflow.pyfunc.spark_udf() to support StructType results (#7527, @WeichenXu123)
    • [Scoring] Extend TensorFlow and Keras Models to support multi-dimensional inputs with mlflow.pyfunc.spark_udf() (#7531, #7291, @WeichenXu123)
    • [Scoring] Support specifying deployment environment variables and tags when deploying models to SageMaker (#7433, @jhallard)
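
    As a hedged illustration of the new fluent registry search, something like the following should work against a populated Model Registry; the filter string and model names are made up for this sketch:

    import mlflow

    # Return registered models whose names contain "wine" (illustrative filter)
    for rm in mlflow.search_registered_models(filter_string="name LIKE '%wine%'"):
        print(rm.name, [v.version for v in rm.latest_versions])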

    Bug fixes:

    • [Recipes] Fix a bug that prevented use of custom early_stop functions during model tuning (#7538, @sunishsheth2009)
    • [Recipes] Fix a bug in the logic used to create a Spark session during data ingestion (#7307, @WeichenXu123)
    • [Tracking] Make the metric names produced by mlflow.autolog() consistent with mlflow.evaluate() (#7418, @wenfeiy-db)
    • [Tracking] Fix an autologging bug that caused nested, redundant information to be logged for XGBoost and LightGBM models (#7404, @WeichenXu123)
    • [Tracking] Correctly classify SQLAlchemy OperationalErrors as retryable HTTP errors (#7240, @barrywhart)
    • [Artifacts] Correctly handle special characters in credentials when using FTP artifact storage (#7479, @HCTsai)
    • [Models] Address an issue that prevented MLeap models from being saved on Windows (#6966, @dbczumar)
    • [Scoring] Fix a permissions issue encountered when using NFS during model scoring with mlflow.pyfunc.spark_udf() (#7427, @WeichenXu123)

    Documentation updates:

    • [Docs] Add more examples to the Runs search documentation page (#7487, @apurva-koti)
    • [Docs] Add documentation for Model flavors developed by the community (#7425, @mmerce)
    • [Docs] Add an example for logging and scoring ONNX Models (#7398, @Rusteam)
    • [Docs] Fix a typo in the model scoring REST API example for inputs with the dataframe_split format (#7540, @zhouyangyu)
    • [Docs] Fix a typo in the model scoring REST API example for inputs with the dataframe_records format (#7361, @dbczumar)

    Small bug fixes and documentation updates:

    #7571, #7543, #7529, #7435, #7399, @WeichenXu123; #7568, @xiaoye-hua; #7549, #7557, #7509, #7498, #7499, #7485, #7486, #7484, #7391, #7388, #7390, #7381, #7366, #7348, #7346, #7334, #7340, #7323, @BenWilson2; #7561, #7562, #7560, #7553, #7546, #7539, #7544, #7542, #7541, #7533, #7507, #7470, #7469, #7467, #7466, #7464, #7453, #7449, #7450, #7440, #7430, #7436, #7429, #7426, #7410, #7406, #7409, #7407, #7405, #7396, #7393, #7395, #7384, #7376, #7379, #7375, #7354, #7353, #7351, #7352, #7350, #7345, #6493, #7343, #7344, @harupy; #7494, @dependabot[bot]; #7526, @tobycheese; #7489, @liangz1; #7534, @Jingnan-Jia; #7496, @danielhstahl; #7504, #7503, #7459, #7454, #7447, @tsugumi-sys; #7461, @wkrt7; #7451, #7414, #7372, #7289, @sunishsheth2009; #7441, @ikrizanic; #7432, @Pochingto; #7386, @jhallard; #7370, #7373, #7371, #7336, #7341, #7342, @dbczumar; #7335, @prithvikannan

  • v2.0.1(Nov 15, 2022)

    The 2.0.1 version of MLflow is a major milestone release that focuses on simplifying the management of end-to-end MLOps workflows, providing new feature-rich functionality, and expanding upon the production-ready MLOps capabilities offered by MLflow. This release contains several important breaking changes from the 1.x API, as well as additional major features and improvements.

    Features:

    • [Recipes] MLflow Pipelines is now MLflow Recipes - a framework that enables data scientists to quickly develop high-quality models and deploy them to production
    • [Recipes] Add support for classification models to MLflow Recipes (#7082, @bbarnes52)
    • [UI] Introduce support for pinning runs within the experiments UI (#7177, @harupy)
    • [UI] Simplify the layout and provide customized displays of metrics, parameters, and tags within the experiments UI (#7177, @harupy)
    • [UI] Simplify run filtering and ordering of runs within the experiments UI (#7177, @harupy)
    • [Tracking] Update mlflow.pyfunc.get_model_dependencies() to download all referenced requirements files for specified models (#6733, @harupy)
    • [Tracking] Add support for selecting the Keras model save_format used by mlflow.tensorflow.autolog() (#7123, @balvisio)
    • [Models] Set mlflow.evaluate() status to stable as it is now a production-ready API; a usage sketch follows this list
    • [Models] Simplify APIs for specifying custom metrics and custom artifacts during model evaluation with mlflow.evaluate() (#7142, @harupy)
    • [Models] Correctly infer the positive label for binary classification within mlflow.evaluate() (#7149, @dbczumar)
    • [Models] Enable automated signature logging for tensorflow and keras models when mlflow.tensorflow.autolog() is enabled (#6678, @BenWilson2)
    • [Models] Add support for native Keras and Tensorflow Core models within mlflow.tensorflow (#6530, @WeichenXu123)
    • [Models] Add support for defining the model_format used by mlflow.xgboost.save/log_model() (#7068, @AvikantSrivastava)
    • [Scoring] Overhaul the model scoring REST API to introduce format indicators for inputs and support multiple output fields (#6575, @tomasatdatabricks; #7254, @adriangonz)
    • [Scoring] Add support for ragged arrays in model signatures (#7135, @trangevi)
    • [Java] Add getModelVersion API to the java client (#6955, @wgottschalk)
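
    A minimal sketch of the now-stable evaluation API, assuming scikit-learn is installed; the dataset, model, and column names below are illustrative, not part of the release notes:

    import mlflow
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True, as_frame=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    with mlflow.start_run():
        model_info = mlflow.sklearn.log_model(model, "model")
        # Models are passed to mlflow.evaluate() by URI in MLflow 2.x
        result = mlflow.evaluate(
            model_info.model_uri,
            data=X.assign(label=y),
            targets="label",
            model_type="classifier",
        )
        print(result.metrics)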

    Breaking Changes:

    The following list of breaking changes is arranged by order of significance within each category.

    • [Core] Support for Python 3.7 has been dropped. MLflow now requires Python >=3.8
    • [Recipes] mlflow.pipelines APIs have been replaced with mlflow.recipes
    • [Tracking / Registry] Remove /preview routes for Tracking and Model Registry REST APIs (#6667, @harupy)
    • [Tracking] Remove deprecated list APIs for experiments, models, and runs from Python, Java, R, and REST APIs (#6785, #6786, #6787, #6788, #6800, #6868, @dbczumar); a migration sketch follows this list
    • [Tracking] Remove deprecated runs response field from Get Experiment REST API response (#6541, #6524 @dbczumar)
    • [Tracking] Remove deprecated MlflowClient.download_artifacts API (#6537, @WeichenXu123)
    • [Tracking] Change the behavior of environment variable handling for MLFLOW_EXPERIMENT_NAME such that the value is always used when creating an experiment (#6674, @BenWilson2)
    • [Tracking] Update mlflow server to run in --serve-artifacts mode by default (#6502, @harupy)
    • [Tracking] Update Experiment ID generation for the Filestore backend to enable threadsafe concurrency (#7070, @BenWilson2)
    • [Tracking] Remove dataset_name and on_data_{name | hash} suffixes from mlflow.evaluate() metric keys (#7042, @harupy)
    • [Models / Scoring / Projects] Change default environment manager to virtualenv instead of conda for model inference and project execution (#6459, #6489 @harupy)
    • [Models] Move Keras model logging APIs to the mlflow.tensorflow flavor and drop support for TensorFlow Estimators (#6530, @WeichenXu123)
    • [Models] Remove deprecated mlflow.sklearn.eval_and_log_metrics() API in favor of mlflow.evaluate() API (#6520, @dbczumar)
    • [Models] Require mlflow.evaluate() model inputs to be specified as URIs (#6670, @harupy)
    • [Models] Drop support for returning custom metrics and artifacts from the same function when using mlflow.evaluate(), in favor of custom_artifacts (#7142, @harupy)
    • [Models] Extend PyFuncModel spec to support conda and virtualenv subfields (#6684, @harupy)
    • [Scoring] Remove support for defining input formats using the Content-Type header (#6575, @tomasatdatabricks; #7254, @adriangonz)
    • [Scoring] Replace the --no-conda CLI option argument for native serving with --env-manager='local' (#6501, @harupy)
    • [Scoring] Remove public APIs for mlflow.sagemaker.deploy() and mlflow.sagemaker.delete() in favor of MLflow deployments APIs, such as mlflow deployments -t sagemaker (#6650, @dbczumar)
    • [Scoring] Rename input argument df to inputs in mlflow.deployments.predict() method (#6681, @BenWilson2)
    • [Projects] Replace the use_conda argument with the env_manager argument within the run CLI command for MLflow Projects (#6654, @harupy)
    • [Projects] Modify the MLflow Projects docker image build options by renaming --skip-image-build to --build-image with a default of False (#7011, @harupy)
    • [Integrations/Azure] Remove deprecated mlflow.azureml modules from MLflow in favor of the azure-mlflow deployment plugin (#6691, @BenWilson2)
    • [R] Remove conda integration with the R client (#6638, @harupy)
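
    For code that used the removed list_* APIs, the search_* family is the migration path. A minimal sketch (the filter below is illustrative):

    import mlflow

    # Replaces the removed list_experiments() API
    for exp in mlflow.search_experiments(filter_string="name LIKE 'demo%'"):
        print(exp.experiment_id, exp.name)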

    Bug fixes:

    • [Recipes] Fix rendering issue with profile cards polyfill (#7154, @hubertzub-db)
    • [Tracking] Set the MLflow Run name correctly when specified as part of the tags argument to mlflow.start_run() (#7228, @Cokral)
    • [Tracking] Fix an issue with conflicting MLflow Run name assignment if the mlflow.runName tag is set (#7138, @harupy)
    • [Scoring] Fix incorrect payload constructor error in SageMaker deployment client predict() API (#7193, @dbczumar)
    • [Scoring] Fix an issue where DataCaptureConfig information was not preserved when updating a Sagemaker deployment (#7281, @harupy)

    Small bug fixes and documentation updates:

    #7309, #7314, #7288, #7276, #7244, #7207, #7175, #7107, @sunishsheth2009; #7261, #7313, #7311, #7249, #7278, #7260, #7284, #7283, #7263, #7266, #7264, #7267, #7265, #7250, #7259, #7247, #7242, #7143, #7214, #7226, #7230, #7227, #7229, #7225, #7224, #7223, #7210, #7192, #7197, #7196, #7204, #7198, #7191, #7189, #7184, #7182, #7170, #7183, #7131, #7165, #7151, #7164, #7168, #7150, #7128, #7028, #7118, #7117, #7102, #7072, #7103, #7101, #7100, #7099, #7098, #7041, #7040, #6978, #6768, #6719, #6669, #6658, #6656, #6655, #6538, #6507, #6504 @harupy; #7310, #7308, #7300, #7290, #7239, #7220, #7127, #7091, #6713 @BenWilson2; #7299, #7271, #7209, #7180, #7179, #7158, #7147, #7114, @prithvikannan; #7275, #7245, #7134, #7059, @jinzhang21; #7306, #7298, #7287, #7272, #7258, #7236, @ayushthe1; #7279, @tk1012; #7219, @rddefauw; #7218, #7208, #7188, #7190, #7176, #7137, #7136, #7130, #7124, #7079, #7052, #6541 @dbczumar; #6640, @WeichenXu123; #7200, @hubertzub-db; #7121, @Gonmeso; #6988, @alonisser; #7141, @pdifranc; #7086, @jerrylian-db; #7286, @shogohida

  • v2.0.0rc0(Nov 1, 2022)

  • v1.30.0(Oct 20, 2022)

    We are happy to announce the availability of MLflow 1.30.0!

    MLflow 1.30.0 includes several major features and improvements:

    Features:

    • [Pipelines] Introduce hyperparameter tuning support to MLflow Pipelines (#6859, @prithvikannan)
    • [Pipelines] Introduce support for prediction outlier comparison to training data set (#6991, @jinzhang21)
    • [Pipelines] Introduce support for recording all training parameters for reproducibility (#7026, #7094, @prithvikannan)
    • [Pipelines] Add support for Delta tables as a datasource in the ingest step (#7010, @sunishsheth2009)
    • [Pipelines] Add expanded support for data profiling up to 10,000 columns (#7035, @prithvikanna)
    • [Pipelines] Add support for AutoML in MLflow Pipelines using FLAML (#6959, @mshtelma)
    • [Pipelines] Add support for simplified transform step execution by allowing for unspecified configuration (#6909, @apurva-koti)
    • [Pipelines] Introduce a data preview tab to the transform step card (#7033, @prithvikannan)
    • [Tracking] Introduce run_name attribute for create_run, get_run and update_run APIs (#6782, #6798 @apurva-koti)
    • [Tracking] Add support for searching by creation_time and last_update_time for the search_experiments API (#6979, @harupy)
    • [Tracking] Add support for search terms run_id IN and run ID NOT IN for the search_runs API (#6945, @harupy)
    • [Tracking] Add support for searching by user_id and end_time for the search_runs API (#6881, #6880 @subramaniam02)
    • [Tracking] Add support for searching by run_name and run_id for the search_runs API (#6899, @harupy; #6952, @alexacole); a usage sketch follows this list
    • [Tracking] Add support for synchronizing run name attribute and mlflow.runName tag (#6971, @BenWilson2)
    • [Tracking] Add support for signed tracking server requests using AWSSigv4 and AWS IAM (#7044, @pdifranc)
    • [Tracking] Introduce the update_run() API for modifying the status and name attributes of existing runs (#7013, @gabrielfu)
    • [Tracking] Add support for experiment deletion in the mlflow gc cli API (#6977, @shaikmoeed)
    • [Models] Add support for environment restoration in the evaluate() API (#6728, @jerrylian-db)
    • [Models] Remove restrictions on binary classification labels in the evaluate() API (#7077, @dbczumar)
    • [Scoring] Add support for BooleanType to mlflow.pyfunc.spark_udf() (#6913, @BenWilson2)
    • [SQLAlchemy] Add support for configurable Pool class options for SqlAlchemyStore (#6883, @mingyu89)
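
    A rough sketch of searching by the new run_name attribute; the experiment and run names are placeholders:

    import mlflow

    with mlflow.start_run(run_name="baseline"):
        mlflow.log_metric("rmse", 0.42)

    # search_runs() returns a pandas DataFrame by default
    df = mlflow.search_runs(
        experiment_names=["Default"],
        filter_string="attributes.run_name = 'baseline'",
    )
    print(df[["run_id", "metrics.rmse"]])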

    Bug fixes:

    • [Pipelines] Enable Pipeline subprocess commands to create a new SparkSession if one does not exist (#6846, @prithvikannan)
    • [Pipelines] Fix a rendering issue with bool column types in Step Card data profiles (#6907, @sunishsheth2009)
    • [Pipelines] Add validation and an exception if required step files are missing (#7067, @mingyu89)
    • [Pipelines] Change step configuration validation to only be performed during runtime execution of a step (#6967, @prithvikannan)
    • [Tracking] Fix infinite recursion bug when inferring the model schema in mlflow.pyspark.ml.autolog() (#6831, @harupy)
    • [UI] Remove the browser error notification when failing to fetch artifacts (#7001, @kevingreer)
    • [Models] Allow mlflow-skinny package to serve as base requirement in MLmodel requirements (#6974, @BenWilson2)
    • [Models] Fix an issue with code path resolution for loading SparkML models (#6968, @dbczumar)
    • [Models] Fix an issue with dependency inference in logging SparkML models (#6912, @BenWilson2)
    • [Models] Fix an issue involving potential duplicate downloads for SparkML models (#6903, @serena-ruan)
    • [Models] Add missing pos_label to sklearn.metrics.precision_recall_curve in mlflow.evaluate() (#6854, @dbczumar)
    • [SQLAlchemy] Fix a bug in SqlAlchemyStore where set_tag() updates the incorrect tags (#7027, @gabrielfu)

    Documentation updates:

    • [Models] Update details regarding the default Keras serialization format (#7022, @balvisio)

    Small bug fixes and documentation updates:

    #7093, #7095, #7092, #7064, #7049, #6921, #6920, #6940, #6926, #6923, #6862, @jerrylian-db; #6946, #6954, #6938, @mingyu89; #7047, #7087, #7056, #6936, #6925, #6892, #6860, #6828, @sunishsheth2009; #7061, #7058, #7098, #7071, #7073, #7057, #7038, #7029, #6918, #6993, #6944, #6976, #6960, #6933, #6943, #6941, #6900, #6901, #6898, #6890, #6888, #6886, #6887, #6885, #6884, #6849, #6835, #6834, @harupy; #7094, #7065, #7053, #7026, #7034, #7021, #7020, #6999, #6998, #6996, #6990, #6989, #6934, #6924, #6896, #6895, #6876, #6875, #6861, @prithvikannan; #7081, #7030, #7031, #6965, #6750, @bbarnes52; #7080, #7069, #7051, #7039, #7012, #7004, @dbczumar; #7054, @jinzhang21; #7055, #7037, #7036, #6949, #6951, @apurva-koti; #6815, @michaguenther; #6897, @chaturvedakash; #7025, #6981, #6950, #6948, #6937, #6829, #6830, @BenWilson2; #6982, @vadim; #6985, #6927, @kriscon-db; #6917, #6919, #6872, #6855, @WeichenXu123; #6980, @utkarsh867; #6973, #6935, @wentinghu; #6930, @mingyangge-db; #6956, @RohanBha1; #6916, @av-maslov; #6824, @shrinath-suresh; #6732, @oojo12; #6807, @ikrizanic; #7066, @subramaniam20jan; #7043, @AvikantSrivastava; #6879, @jspablo

  • v1.29.0(Sep 19, 2022)

    We are happy to announce the availability of MLflow 1.29.0!

    MLflow 1.29.0 includes several major features and improvements:

    Features:

    • [Pipelines] Improve performance and fidelity of dataset profiling in the scikit-learn regression Pipeline (#6792, @sunishsheth2009)
    • [Pipelines] Add an mlflow pipelines get-artifact CLI for retrieving Pipeline artifacts (#6517, @prithvikannan)
    • [Pipelines] Introduce an option for skipping dataset profiling to the scikit-learn regression Pipeline (#6456, @apurva-koti)
    • [Pipelines / UI] Display an mlflow pipelines CLI command for reproducing a Pipeline run in the MLflow UI (#6376, @hubertzub-db)
    • [Tracking] Automatically generate friendly names for Runs if not supplied by the user (#6736, @BenWilson2)
    • [Tracking] Add load_text(), load_image() and load_dict() fluent APIs for convenient artifact loading (#6475, @subramaniam02); a usage sketch follows this list
    • [Tracking] Add creation_time and last_update_time attributes to the Experiment class (#6756, @subramaniam02)
    • [Tracking] Add official MLflow Tracking Server Dockerfiles to the MLflow repository (#6731, @oojo12)
    • [Tracking] Add searchExperiments API to Java client and deprecate listExperiments (#6561, @dbczumar)
    • [Tracking] Add mlflow_search_experiments API to R client and deprecate mlflow_list_experiments (#6576, @dbczumar)
    • [UI] Make URLs clickable in the MLflow Tracking UI (#6526, @marijncv)
    • [UI] Introduce support for csv data preview within the artifact viewer pane (#6567, @nnethery)
    • [Model Registry / Models] Introduce mlflow.models.add_libraries_to_model() API for adding libraries to an MLflow Model (#6586, @arjundc-db)
    • [Models] Add model validation support to mlflow.evaluate() (#6582, @zhe-db, @jerrylian-db)
    • [Models] Introduce sample_weights support to mlflow.evaluate() (#6806, @dbczumar)
    • [Models] Add pos_label support to mlflow.evaluate() for identifying the positive class (#6696, @harupy)
    • [Models] Make the metric name prefix and dataset info configurable in mlflow.evaluate() (#6593, @dbczumar)
    • [Models] Add utility for validating the compatibility of a dataset with a model signature (#6494, @serena-ruan)
    • [Models] Add predict_proba() support to the pyfunc representation of scikit-learn models (#6631, @skylarbpayne)
    • [Models] Add support for Decimal type inference to MLflow Model schemas (#6600, @shitaoli-db)
    • [Models] Add new CLI command for generating Dockerfiles for model serving (#6591, @anuarkaliyev23)
    • [Scoring] Add /health endpoint to scoring server (#6574, @gabriel-milan)
    • [Scoring] Support specifying a variant_name during Sagemaker deployment (#6486, @nfarley-soaren)
    • [Scoring] Support specifying a data_capture_config during SageMaker deployment (#6423, @jonwiggins)
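
    A minimal sketch of the new artifact-loading helpers; the file name and contents are illustrative:

    import mlflow
    import mlflow.artifacts

    with mlflow.start_run() as run:
        mlflow.log_dict({"alpha": 0.4, "l1_ratio": 0.1}, "config.json")
        # Load the logged artifact back without downloading it manually
        config = mlflow.artifacts.load_dict(f"runs:/{run.info.run_id}/config.json")
        print(config)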

    Bug fixes:

    • [Tracking] Make Run and Experiment deletion and restoration idempotent (#6641, @dbczumar)
    • [UI] Fix an alignment bug affecting the Experiments list in the MLflow UI (#6569, @sunishsheth2009)
    • [Models] Fix a regression in the directory path structure of logged Spark Models that occurred in MLflow 1.28.0 (#6683, @gwy1995)
    • [Models] No longer reload the main module when loading model code (#6647, @Jooakim)
    • [Artifacts] Fix an mlflow server compatibility issue with HDFS when running in --serve-artifacts mode (#6482, @shidianshifen)
    • [Scoring] Fix an inference failure with 1-dimensional tensor inputs in TensorFlow and Keras (#6796, @LiamConnell)

    Documentation updates:

    • [Tracking] Mark the SearchExperiments API as stable (#6551, @dbczumar)
    • [Tracking / Model Registry] Deprecate the ListExperiments, ListRegisteredModels, and list_run_infos() APIs (#6550, @dbczumar)
    • [Scoring] Deprecate mlflow.sagemaker.deploy() in favor of SageMakerDeploymentClient.create() (#6651, @dbczumar)

    Small bug fixes and documentation updates:

    #6803, #6804, #6801, #6791, #6772, #6745, #6762, #6760, #6761, #6741, #6725, #6720, #6666, #6708, #6717, #6704, #6711, #6710, #6706, #6699, #6700, #6702, #6701, #6685, #6664, #6644, #6653, #6629, #6639, #6624, #6565, #6558, #6557, #6552, #6549, #6534, #6533, #6516, #6514, #6506, #6509, #6505, #6492, #6490, #6478, #6481, #6464, #6463, #6460, #6461, @harupy; #6810, #6809, #6727, #6648, @BenWilson2; #6808, #6766, #6729, @jerrylian-db; #6781, #6694, @marijncv; #6580, #6661, @bbarnes52; #6778, #6687, #6623, @shraddhafalane; #6662, #6737, #6612, #6595, @sunishsheth2009; #6777, @aviralsharma07; #6665, #6743, #6573, @liangz1; #6784, @apurva-koti; #6753, #6751, @mingyu89; #6690, #6455, #6484, @kriscon-db; #6465, #6689, @hubertzub-db; #6721, @WeichenXu123; #6722, #6718, #6668, #6663, #6621, #6547, #6508, #6474, #6452, @dbczumar; #6555, #6584, #6543, #6542, #6521, @dsgibbons; #6634, #6596, #6563, #6495, @prithvikannan; #6571, @smurching; #6630, #6483, @serena-ruan; #6642, @thinkall; #6614, #6597, @jinzhang21; #6457, @cnphil; #6570, #6559, @kumaryogesh17; #6560, #6540, @iamthen0ise; #6544, @Monkero; #6438, @ahlag; #3292, @dolfinus; #6637, @ninabacc-db; #6632, @arpitjasa-db

  • v1.28.0(Aug 11, 2022)

    MLflow 1.28.0 includes several major features and improvements:

    Features:

    • [Pipelines] Log the full Pipeline runtime configuration to MLflow Tracking during Pipeline execution (#6359, @jinzhang21)
    • [Pipelines] Add pipeline.yaml configurations to specify the Model Registry backend used for model registration (#6284, @sunishsheth2009)
    • [Pipelines] Support optionally skipping the transform step of the scikit-learn regression pipeline (#6362, @sunishsheth2009)
    • [Pipelines] Add UI links to Runs and Models in Pipeline Step Cards on Databricks (#6294, @dbczumar)
    • [Tracking] Introduce mlflow.search_experiments() API for searching experiments by name and by tags (#6333, @WeichenXu123; #6227, #6172, #6154, @harupy)
    • [Tracking] Increase the maximum parameter value length supported by File and SQL backends to 500 characters (#6358, @johnyNJ)
    • [Tracking] Introduce an --older-than flag to mlflow gc for removing runs based on deletion time (#6354, @Jason-CKY)
    • [Tracking] Add MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE environment variable for recycling SQLAlchemy connections (#6344, @postrational)
    • [UI] Display deeply nested runs in the Runs Table on the Experiment Page (#6065, @tospe)
    • [UI] Add box plot visualization for metrics to the Compare Runs page (#6308, @ahlag)
    • [UI] Display tags on the Compare Runs page (#6164, @CaioCavalcanti)
    • [UI] Use scientific notation for axes when viewing metric plots in log scale (#6176, @RajezMariner)
    • [UI] Add button to Metrics page for downloading metrics as CSV (#6048, @rafaelvp-db)
    • [UI] Include NaN and +/- infinity values in plots on the Metrics page (#6422, @hubertzub-db)
    • [Tracking / Model Registry] Introduce environment variables to control retry behavior and timeouts for REST API requests (#5745, @peterdhansen)
    • [Tracking / Model Registry] Make MlflowClient importable as mlflow.MlflowClient (#6085, @subramaniam02); a usage sketch follows this list
    • [Model Registry] Add support for searching registered models and model versions by tags (#6413, #6411, #6320, @WeichenXu123)
    • [Model Registry] Add stage parameter to set_model_version_tag() (#6185, @subramaniam02)
    • [Model Registry] Add --registry-store-uri flag to mlflow server for specifying the Model Registry backend URI (#6142, @Secbone)
    • [Models] Improve performance of Spark Model logging on Databricks (#6282, @bbarnes52)
    • [Models] Include Pandas Series names in inferred model schemas (#6361, @RynoXLI)
    • [Scoring] Make model_uri optional in mlflow models build-docker to support building generic model serving images (#6302, @harupy)
    • [R] Support logging of NA and NaN parameter values (#6263, @nathaneastwood)
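
    A small sketch of the top-level client import; experiment "0" is the default experiment on a fresh tracking store:

    import mlflow

    client = mlflow.MlflowClient()
    run = client.create_run(experiment_id="0")
    client.log_metric(run.info.run_id, "accuracy", 0.91)
    client.set_terminated(run.info.run_id)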

    Bug fixes and documentation updates:

    • [Pipelines] Improve scikit-learn regression pipeline latency by limiting dataset profiling to the first 100 columns (#6297, @sunishsheth2009)
    • [Pipelines] Use xdg-open instead of open for viewing Pipeline results on Linux systems (#6326, @strangiato)
    • [Pipelines] Fix a bug that skipped Step Card rendering in Jupyter Notebooks (#6378, @apurva-koti)
    • [Tracking] Use the 401 HTTP response code in authorization failure REST API responses, instead of 500 (#6106, @balvisio)
    • [Tracking] Correctly classify artifacts as files and directories when using Azure Blob Storage (#6237, @nerdinand)
    • [Tracking] Fix a bug in the File backend that caused run metadata to be lost in the event of a failed write (#6388, @dbczumar)
    • [Tracking] Adjust mlflow.pyspark.ml.autolog() to only log model signatures for supported input / output data types (#6365, @harupy)
    • [Tracking] Adjust mlflow.tensorflow.autolog() to log TensorFlow early stopping callback info when log_models=False is specified (#6170, @WeichenXu123)
    • [Tracking] Fix signature and input example logging errors in mlflow.sklearn.autolog() for models containing transformers (#6230, @dbczumar)
    • [Tracking] Fix a failure in mlflow gc that occurred when removing a run whose artifacts had been previously deleted (#6165, @dbczumar)
    • [Tracking] Add missing sqlparse library to MLflow Skinny client, which is required for search support (#6174, @dbczumar)
    • [Tracking / Model Registry] Fix an mlflow server bug that rejected parameters and tags with empty string values (#6179, @dbczumar)
    • [Model Registry] Fix a failure preventing model version schemas from being downloaded with --serve-artifacts enabled (#6355, @abbas123456)
    • [Scoring] Patch the Java Model Server to support MLflow Models logged on recent versions of the Databricks Runtime (#6337, @dbczumar)
    • [Scoring] Verify that either the deployment name or endpoint is specified when invoking the mlflow deployments predict CLI (#6323, @dbczumar)
    • [Scoring] Properly encode datetime columns when performing batch inference with mlflow.pyfunc.spark_udf() (#6244, @harupy)
    • [Projects] Fix an issue where local directory paths were misclassified as Git URIs when running Projects (#6218, @ElefHead)
    • [R] Fix metric logging behavior for +/- infinity values (#6271, @nathaneastwood)
    • [Docs] Move Python API docs for MlflowClient from mlflow.tracking to mlflow.client (#6405, @dbczumar)
    • [Docs] Document that MLflow Pipelines requires Make (#6216, @dbczumar)
    • [Docs] Improve documentation for developing and testing MLflow JS changes in CONTRIBUTING.rst (#6330, @ahlag)

    Small bug fixes and doc updates (#6322, #6321, #6213, @KarthikKothareddy; #6409, #6408, #6396, #6402, #6399, #6398, #6397, #6390, #6381, #6386, #6385, #6373, #6375, #6380, #6374, #6372, #6363, #6353, #6352, #6350, #6351, #6349, #6347, #6287, #6341, #6342, #6340, #6338, #6319, #6314, #6316, #6317, #6318, #6315, #6313, #6311, #6300, #6292, #6291, #6289, #6290, #6278, #6279, #6276, #6272, #6252, #6243, #6250, #6242, #6241, #6240, #6224, #6220, #6208, #6219, #6207, #6171, #6206, #6199, #6196, #6191, #6190, #6175, #6167, #6161, #6160, #6153, @harupy; #6193, @jwgwalton; #6304, #6239, #6234, #6229, @sunishsheth2009; #6258, @xanderwebs; #6106, @balvisio; #6303, @bbarnes52; #6117, @wenfeiy-db; #6389, #6214, @apurva-koti; #6412, #6420, #6277, #6266, #6260, #6148, @WeichenXu123; #6120, @ameya-parab; #6281, @nathaneastwood; #6426, #6415, #6417, #6418, #6257, #6182, #6157, @dbczumar; #6189, @shrinath-suresh; #6309, @SamirPS; #5897, @temporaer; #6251, @herrmann; #6198, @sniafas; #6368, #6158, @jinzhang21; #6236, @subramaniam02; #6036, @serena-ruan; #6430, @ninabacc-db)

    Note: Version 1.28.0 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

  • v1.27.0(Jun 29, 2022)

    MLflow 1.27.0 includes several major features and improvements:

    • [Pipelines] With MLflow 1.27.0, we are excited to announce the release of MLflow Pipelines, an opinionated framework for structuring MLOps workflows that simplifies and standardizes machine learning application development and productionization. MLflow Pipelines makes it easy for data scientists to follow best practices for creating production-ready ML deliverables, allowing them to focus on developing excellent models. MLflow Pipelines also enables ML engineers and DevOps teams to seamlessly deploy models to production and incorporate them into applications. To get started with MLflow Pipelines, check out the docs at https://mlflow.org/docs/latest/pipelines.html. (#6115)

    • [UI] Introduce UI support for searching and comparing runs across multiple Experiments (#5971, @r3stl355)

    More features:

    • [Tracking] When using batch logging APIs, automatically split large sets of metrics, tags, and params into multiple requests (#6052, @nzw0301)
    • [Tracking] When an Experiment is deleted, SQL-based backends also move the associated Runs to the "deleted" lifecycle stage (#6064, @AdityaIyengar27)
    • [Tracking] Add support for logging single-element ndarray and tensor instances as metrics via the mlflow.log_metric() API (#5756, @ntakouris); see the sketch after this list
    • [Models] Add support for CatBoostRanker models to the mlflow.catboost flavor (#6032, @danielgafni)
    • [Models] Integrate SHAP's KernelExplainer with mlflow.evaluate(), enabling model explanations on categorical data (#6044, #5920, @WeichenXu123)
    • [Models] Extend mlflow.evaluate() to automatically log the score() outputs of scikit-learn models as metrics (#5935, #5903, @WeichenXu123)
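
    A hedged sketch of the relaxed metric-value handling described above, assuming NumPy is installed:

    import numpy as np
    import mlflow

    with mlflow.start_run():
        # A single-element array is accepted where a plain float was previously required
        mlflow.log_metric("rmse", np.array([0.42]))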

    Bug fixes and documentation updates:

    • [UI] Fix broken model links in the Runs table on the MLflow Experiment Page (#6014, @hctpbl)
    • [Tracking/Installation] Require sqlalchemy>=1.4.0 upon MLflow installation, which is necessary for usage of SQL-based MLflow Tracking backends (#6024, @sniafas)
    • [Tracking] Fix a regression that caused mlflow server to reject LogParam API requests containing empty string values (#6031, @harupy)
    • [Tracking] Fix a failure in scikit-learn autologging that occurred when matplotlib was not installed on the host system (#5995, @fa9r)
    • [Tracking] Fix a failure in TensorFlow autologging that occurred when training models on tf.data.Dataset inputs (#6061, @dbczumar)
    • [Artifacts] Address artifact download failures from SFTP locations that occurred due to mismanaged concurrency (#5840, @rsundqvist)
    • [Models] Fix a bug where MLflow Models did not restore bundled code properly if multiple models use the same code module name (#5926, @BFAnas)
    • [Models] Address an issue where mlflow.sklearn.model() did not properly restore bundled model code (#6037, @WeichenXu123)
    • [Models] Fix a bug in mlflow.evaluate() that caused input data objects to be mutated when evaluating certain scikit-learn models (#6141, @dbczumar)
    • [Models] Fix a failure in mlflow.pyfunc.spark_udf that occurred when the UDF was invoked on an empty RDD partition (#6063, @WeichenXu123)
    • [Models] Fix a failure in mlflow models build-docker that occurred when env-manager=local was specified (#6046, @bneijt)
    • [Projects] Improve robustness of the git repository check that occurs prior to MLflow Project execution (#6000, @dkapur17)
    • [Projects] Address a failure that arose when running a Project that does not have a master branch (#5889, @harupy)
    • [Docs] Correct several typos throughout the MLflow docs (#5959, @ryanrussell)

    Small bug fixes and doc updates (#6041, @drsantos89; #6138, #6137, #6132, @sunishsheth2009; #6144, #6124, #6125, #6123, #6057, #6060, #6050, #6038, #6029, #6030, #6025, #6018, #6019, #5962, #5974, #5972, #5957, #5947, #5907, #5938, #5906, #5932, #5919, #5914, #5888, #5890, #5886, #5873, #5865, #5843, @harupy; #6113, @comojin1994; #5930, @yashaswikakumanu; #5837, @shrinath-suresh; #6067, @deepyaman; #5997, @idlefella; #6021, @BenWilson2; #5984, @Sumanth077; #5929, @krunal16-c; #5879, @kugland; #5875, @ognis1205; #6006, @ryanrussell; #6140, @jinzhang21; #5983, @elk15; #6022, @apurva-koti; #5982, @EB-Joel; #5981, #5980, @punitkashyup; #6103, @ikrizanic; #5988, #5969, @SaumyaBhushan; #6020, #5991, @WeichenXu123; #5910, #5912, @Dark-Knight11; #6005, @Asinsa; #6023, @subramaniam02; #5999, @Regis-Caelum; #6007, @CaioCavalcanti; #5943, @kvaithin; #6017, #6002, @NeoKish; #6111, @T1b4lt; #5986, @seyyidibrahimgulec; #6053, @Zohair-coder; #6146, #6145, #6143, #6139, #6134, #6136, #6135, #6133, #6071, #6070, @dbczumar; #6026, @rotate2050)

  • v1.26.1(May 28, 2022)

    MLflow 1.26.1 is a patch release containing the following bug fixes:

    • [Installation] Fix compatibility issue with protobuf >= 4.21.0 (#5945, @harupy)
    • [Models] Fix get_model_dependencies behavior for models: URIs containing artifact paths (#5921, @harupy)
    • [Models] Revert a problematic change to artifacts persistence in mlflow.pyfunc.log_model() that was introduced in MLflow 1.25.0 (#5891, @kyle-jarvis)
    • [Models] Close associated image files when EvaluationArtifact outputs from mlflow.evaluate() are garbage collected (#5900, @WeichenXu123)

    Small bug fixes and updates (#5874, #5942, #5941, #5940, #5938, @harupy; #5893, @PrajwalBorkar; #5909, @yashaswikakumanu; #5937, @BenWilson2)

  • v1.26.0(May 16, 2022)

    MLflow 1.26.0 includes several major features and improvements:

    Features:

    • [CLI] Add endpoint naming and options configuration to the deployment CLI (#5731, @trangevi)
    • [Build,Doc] Add development environment setup script for Linux and MacOS x86 Operating Systems (#5717, @BenWilson2)
    • [Tracking] Update mlflow.set_tracking_uri to add support for paths defined as pathlib.Path in addition to existing str path declarations (#5824, @cacharle)
    • [Scoring] Add custom timeout override option to the scoring server CLI to support high latency models (#5663, @sniafas)
    • [UI] Add sticky header to experiment run list table to support column name visibility when scrolling beyond page fold (#5818, @hubertzub-db)
    • [Artifacts] Add GCS support for MLflow garbage collection (#5811, @aditya-iyengar-rtl-de)
    • [Evaluate] Add pos_label argument for eval_and_log_metrics API to support accurate binary classifier evaluation metrics (#5807, @yxiong)
    • [UI] Add fields for latest, minimum and maximum metric values on metric display page (#5574, @adamreeve)
    • [Models] Add support for input_example and signature logging for pyspark ml flavor when using autologging (#5719, @bali0019)
    • [Models] Add virtualenv environment manager support for mlflow models docker-build CLI (#5728, @harupy)
    • [Models] Add support for wildcard module matching in log_model_allowlist for PySpark models (#5723, @serena-ruan)
    • [Projects] Add virtualenv environment manager support for MLflow projects (#5631, @harupy)
    • [Models] Add virtualenv environment manager support for MLflow Models (#5380, @harupy)
    • [Models] Add virtualenv environment manager support for mlflow.pyfunc.spark_udf (#5676, @WeichenXu123)
    • [Models] Add support for input_example and signature logging for tensorflow flavor when using autologging (#5510, @bali0019)
    • [Server-infra] Add JSON Schema Type Validation to enable raising 400 errors on malformed requests to REST API endpoints (#5458, @mrkaye97)
    • [Scoring] Introduce abstract endpoint interface for mlflow deployments (#5378, @trangevi)
    • [UI] Add End Time and Duration fields to run comparison page (#3378, @RealArpanBhattacharya)
    • [Serving] Add schema validation support when parsing input csv data for model serving (#5531, @vvijay-bolt)

    Bug fixes and documentation updates:

    • [Models] Fix REPL ID propagation from datasource listener to publisher for Spark data sources (#5826, @dbczumar)
    • [UI] Update ag-grid and implement getRowId to improve performance in the runs table visualization (#5725, @adamreeve)
    • [Serving] Fix tf-serving parsing to support columnar-based formatting (#5825, @arjundc-db)
    • [Artifacts] Update log_artifact to support models larger than 2GB in HDFS (#5812, @hitchhicker)
    • [Models] Fix autologging to support lightgbm metric names containing "@" symbols (#5785, @mengchendd)
    • [Models] Pyfunc: Fix code directory resolution of subdirectories (#5806, @dbczumar)
    • [Server-Infra] Fix an mlflow R server startup failure on Windows (#5767, @serena-ruan)
    • [Docs] Add documentation for virtualenv environment manager support for MLflow projects (#5727, @harupy)
    • [UI] Fix artifacts display sizing to support full width rendering in preview pane (#5606, @szczeles)
    • [Models] Fix local hostname issues when loading spark model by binding driver address to localhost (#5753, @WeichenXu123)
    • [Models] Fix autologging validation and batch_size calculations for tensorflow flavor (#5683, @MarkYHZhang)
    • [Artifacts] Fix SqlAlchemyStore.log_batch implementation to make it log data in batches (#5460, @erensahin)

    Small bug fixes and doc updates (#5858, #5859, #5853, #5854, #5845, #5829, #5842, #5834, #5795, #5777, #5794, #5766, #5778, #5765, #5763, #5768, #5769, #5760, #5727, #5748, #5726, #5721, #5711, #5710, #5708, #5703, #5702, #5696, #5695, #5669, #5670, #5668, #5661, #5638, @harupy; #5749, @arpitjasa-db; #5675, @Davidswinkels; #5803, #5797, @ahlag; #5743, @kzhang01; #5650, #5805, #5724, #5720, #5662, @BenWilson2; #5627, @cterrelljones; #5646, @kutal10; #5758, @davideli-db; #5810, @rahulporuri; #5816, #5764, @shrinath-suresh; #5869, #5715, #5737, #5752, #5677, #5636, @WeichenXu123; #5735, @subramaniam02; #5746, @akaigraham; #5734, #5685, @lucalves; #5761, @marcelatoffernet; #5707, @aashish-khub; #5808, @ketangangal; #5730, #5700, @shaikmoeed; #5775, @dbczumar; #5747, @zhixuanevelynwu)

    Note: Version 1.26.0 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

  • v1.25.1(Apr 13, 2022)

    MLflow 1.25.1 is a patch release containing the following bug fixes:

    • [Models] Fix a pyfunc artifact overwrite bug when multiple artifacts are saved in sub-directories (#5657, @kyle-jarvis)
    • [Scoring] Fix permissions issue for Spark workers accessing model artifacts from a temp directory created by the driver (#5684, @WeichenXu123)

    Note: Version 1.25.1 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

  • v1.25.0(Apr 11, 2022)

    MLflow 1.25.0 includes several major features and improvements:

    Features:

    • [Tracking] Introduce a new fluent API mlflow.last_active_run() that provides the most recently active run (#5584, @MarkYHZhang); see the sketch after this list
    • [Tracking] Add experiment_names argument to the mlflow.search_runs() API to support searching runs by experiment names (#5564, @r3stl355)
    • [Tracking] Add a description parameter to mlflow.start_run() (#5534, @dogeplusplus)
    • [Tracking] Add log_every_n_step parameter to mlflow.pytorch.autolog() to control metric logging frequency (#5516, @adamreeve)
    • [Tracking] Log pyspark.ml.param.Params values as MLflow parameters during PySpark autologging (#5481, @serena-ruan)
    • [Tracking] Add support for pyspark.ml.Transformers to PySpark autologging (#5466, @serena-ruan)
    • [Tracking] Add input example and signature autologging for Keras models (#5461, @bali0019)
    • [Models] Introduce mlflow.diviner flavor for large-scale time series forecasting (#5553, @BenWilson2)
    • [Models] Add pyfunc.get_model_dependencies() API to retrieve reproducible environment specifications for MLflow Models with the pyfunc flavor (#5503, @WeichenXu123)
    • [Models] Add code_paths argument to all model flavors to support packaging custom module code with MLflow Models (#5448, @stevenchen-db)
    • [Models] Support creating custom artifacts when evaluating models with mlflow.evaluate() (#5405, #5476 @MarkYHZhang)
    • [Models] Add mlflow_version field to MLModel specification (#5515, #5576, @r3stl355)
    • [Models] Add support for logging models to preexisting destination directories (#5572, @akshaya-a)
    • [Scoring / Projects] Introduce --env-manager configuration for specifying environment restoration tools (e.g. conda) and deprecate --no-conda (#5567, @harupy)
    • [Scoring] Support restoring model dependencies in mlflow.pyfunc.spark_udf() to ensure accurate predictions (#5487, #5561, @WeichenXu123)
    • [Scoring] Add support for numpy.ndarray type inputs to the TensorFlow pyfunc predict() function (#5545, @WeichenXu123)
    • [Scoring] Support deployment of MLflow Models to Sagemaker Serverless (#5610, @matthewmayo)
    • [UI] Add MLflow version to header beneath logo (#5504, @adamreeve)
    • [Artifacts] Introduce a mlflow.artifacts.download_artifacts() API mirroring the functionality of the mlflow artifacts download CLI (#5585, @dbczumar)
    • [Artifacts] Introduce environment variables for controlling GCS artifact upload/download chunk size and timeouts (#5438, #5483, @mokrueger)
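
    A brief sketch of the new fluent accessor; the parameter name and value are illustrative:

    import mlflow

    with mlflow.start_run():
        mlflow.log_param("alpha", 0.4)

    # The most recently active run remains retrievable after it finishes
    run = mlflow.last_active_run()
    print(run.info.run_id, run.data.params)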

    Bug fixes and documentation updates:

    • [Tracking/SQLAlchemy] Create an index on run_uuid for PostgreSQL to improve query performance (#5446, @harupy)
    • [Tracking] Remove client-side validation of metric, param, tag, and experiment fields (#5593, @BenWilson2)
    • [Projects] Support setting the name of the MLflow Run when executing an MLflow Project (#5187, @bramrodenburg)
    • [Scoring] Use pandas split orientation for DataFrame inputs to SageMaker deployment predict() API to preserve column ordering (#5522, @dbczumar)
    • [Server-Infra] Fix runs search compatibility bugs with PostgreSQL, MySQL, and MSSQL (#5540, @harupy)
    • [CLI] Fix a bug in the mlflow-skinny client that caused mlflow --version to fail (#5573, @BenWilson2)
    • [Docs] Update guidance and examples for model deployment to AzureML to recommend using the mlflow-azureml package (#5491, @santiagxf)

    Small bug fixes and doc updates (#5591, #5629, #5597, #5592, #5562, #5477, @BenWilson2; #5554, @juntai-zheng; #5570, @tahesse; #5605, @guelate; #5633, #5632, #5625, #5623, #5615, #5608, #5600, #5603, #5602, #5596, #5587, #5586, #5580, #5577, #5568, #5290, #5556, #5560, #5557, #5548, #5547, #5538, #5513, #5505, #5464, #5495, #5488, #5485, #5468, #5455, #5453, #5454, #5452, #5445, #5431, @harupy; #5640, @nchittela; #5520, #5422, @Ark-kun; #5639, #5604, @nishipy; #5543, #5532, #5447, #5435, @WeichenXu123; #5502, @singankit; #5500, @Sohamkayal4103; #5449, #5442, @apurva-koti; #5552, @vinijaiswal; #5511, @adamreeve; #5428, @jinzhang21; #5309, @sunishsheth2009; #5581, #5559, @Kr4is; #5626, #5618, #5529, @sisp; #5652, #5624, #5622, #5613, #5509, #5459, #5437, @dbczumar; #5616, @liangz1)

  • v1.24.0(Feb 28, 2022)

    MLflow 1.24.0 includes several major features and improvements:

    Features:

    • [Tracking] Support uploading, downloading, and listing artifacts through the MLflow server via mlflow server --serve-artifacts (#5320, @BenWilson2, @harupy)
    • [Tracking] Add the registered_model_name argument to mlflow.autolog() for automatic model registration during autologging (#5395, @WeichenXu123)
    • [UI] Improve and restructure the Compare Runs page. Additions include "show diff only" toggles and scrollable tables (#5306, @WeichenXu123)
    • [Models] Introduce mlflow.pmdarima flavor for pmdarima models (#5373, @BenWilson2)
    • [Models] When loading an MLflow Model, print a warning if a mismatch is detected between the current environment and the Model's dependencies (#5368, @WeichenXu123)
    • [Models] Support computing custom scalar metrics during model evaluation with mlflow.evaluate() (#5389, @MarkYHZhang)
    • [Scoring] Add support for deploying and evaluating SageMaker models via the MLflow Deployments API (#4971, #5396, @jamestran201)

    Bug fixes and documentation updates:

    • [Tracking / UI] Fix artifact listing and download failures that occurred when operating the MLflow server in --serve-artifacts mode (#5409, @dbczumar)
    • [Tracking] Support environment-variable-based authentication when making artifact requests to the MLflow server in --serve-artifacts mode (#5370, @TimNooren)
    • [Tracking] Fix bugs in hostname and path resolution when making artifacts requests to the MLflow server in --serve-artifacts mode (#5384, #5385, @mert-kirpici)
    • [Tracking] Fix an import error that occurred when mlflow.log_figure() was used without matplotlib.figure imported (#5406, @WeichenXu123)
    • [Tracking] Correctly log XGBoost metrics containing the @ symbol during autologging (#5403, @maxfriedrich)
    • [Tracking] Fix a SQL Server database error that occurred during Runs search (#5382, @dianacarvalho1)
    • [Tracking] When downloading artifacts from HDFS, store them in the user-specified destination directory (#5210, @DimaClaudiu)
    • [Tracking / Model Registry] Improve performance of large artifact and model downloads (#5359, @mehtayogita)
    • [Models] Fix fast.ai PyFunc inference behavior for models with 2D outputs (#5411, @santiagxf)
    • [Models] Record Spark model information to the active run when mlflow.spark.log_model() is called (#5355, @szczeles)
    • [Models] Restore onnxruntime execution providers when loading ONNX models with mlflow.pyfunc.load_model() (#5317, @ecm200)
    • [Projects] Increase Docker image push timeout when using Projects with Docker (#5363, @zanitete)
    • [Python] Fix a bug that prevented users from enabling DEBUG-level Python log outputs (#5362, @dbczumar)
    • [Docs] Add a developer guide explaining how to build custom plugins for mlflow.evaluate() (#5333, @WeichenXu123)

    Small bug fixes and doc updates (#5298, @wamartin-aml; #5399, #5321, #5313, #5307, #5305, #5268, #5284, @harupy; #5329, @Ark-kun; #5375, #5346, #5304, @dbczumar; #5401, #5366, #5345, @BenWilson2; #5326, #5315, @WeichenXu123; #5236, @singankit; #5302, @timvink; #5357, @maitre-matt; #5347, #5344, @mehtayogita; #5367, @apurva-koti; #5348, #5328, #5310, @liangz1; #5267, @sunishsheth2009)

    Note: Version 1.24.0 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

  • v1.23.1(Jan 27, 2022)

    MLflow 1.23.1 is a patch release containing the following bug fixes:

    • [Models] Fix a directory creation failure when loading PySpark ML models (#5299, @arjundc-db)
    • [Model Registry] Revert to using case-insensitive validation logic for stage names in models:/ URIs (#5312, @lichenran1234)
    • [Projects] Fix a race condition during Project tar file creation (#5303, @dbczumar)

    Note: Version 1.23.1 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

  • v1.23.0(Jan 17, 2022)

    MLflow 1.23.0 includes several major features and improvements:

    Note: Version 1.23.0 of the MLflow R package has not yet been released. It will be available on CRAN within the next week.

    Features:

    • [Models] Introduce an mlflow.evaluate() API for evaluating MLflow Models, providing performance and explainability insights. For an overview, see https://mlflow.org/docs/latest/models.html#model-evaluation (#5069, #5092, #5256, @WeichenXu123)
    • [Models] log_model() APIs now return information about the logged MLflow Model, including artifact location, flavors, and schema (#5230, @liangz1)
    • [Models] Introduce an mlflow.models.Model.load_input_example() Python API for loading MLflow Model input examples (#5212, @maitre-matt)
    • [Models] Add a UUID field to the MLflow Model specification. MLflow Models now have a unique identifier (#5149, #5167, @WeichenXu123)
    • [Models] Support passing SciPy CSC and CSR matrices as MLflow Model input examples (#5016, @WeichenXu123)
    • [Model Registry] Support specifying latest in model URI to get the latest version of a model regardless of the stage (#5027, @lichenran1234); see the sketch after this list
    • [Tracking] Add support for LightGBM scikit-learn models to mlflow.lightgbm.autolog() (#5130, #5200, #5271 @jwyyy)
    • [Tracking] Improve S3 artifact download speed by caching boto clients (#4695, @Samreay)
    • [UI] Automatically update metric plots for in-progress runs (#5017, @cedkoffeto, @harupy)
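
    A minimal sketch of the new latest alias in models:/ URIs; "my-model" is a placeholder and assumes a registered model with that name exists, and the input columns are made up:

    import pandas as pd
    import mlflow.pyfunc

    model = mlflow.pyfunc.load_model("models:/my-model/latest")
    print(model.predict(pd.DataFrame({"x": [1.0, 2.0]})))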

    Bug fixes and documentation updates:

    • [Models] Fix a bug in MLflow Model schema enforcement where strings were incorrectly cast to Pandas objects (#5134, @stevenchen-db)
    • [Models] Fix a bug where keyword arguments passed to mlflow.pytorch.load_model() were not applied for scripted models (#5163, @schmidt-jake)
    • [Model Registry / R] Fix a bug in the R client mlflow_create_model_version() API that caused the model source to be set incorrectly (#5185, @bramrodenburg)
    • [Projects] Fix parsing behavior for Project URIs containing quotes (#5117, @dinaldoap)
    • [Scoring] Use the correct 400-level error code for malformed MLflow Model Server requests (#5003, @abatomunkuev)
    • [Tracking] Fix a bug where mlflow.start_run() modified user-supplied tags dictionary (#5191, @matheusMoreno)
    • [UI] Fix a bug causing redundant scroll bars to be displayed on the Experiment Page (#5159, @sunishsheth2009)

    Small bug fixes and doc updates (#5275, #5264, #5244, #5249, #5255, #5248, #5243, #5240, #5239, #5232, #5234, #5235, #5082, #5220, #5219, #5226, #5217, #5194, #5188, #5132, #5182, #5183, #5180, #5177, #5165, #5164, #5162, #5015, #5136, #5065, #5125, #5106, #5127, #5120, @harupy; #5045, @BenWilson2; #5156, @pbezglasny; #5202, @jwyyy; #3863, @JoshuaAnickat; #5205, @abhiramr; #4604, @OSobky; #4256, @einsmein; #5140, @AveshCSingh; #5273, #5186, #5176, @WeichenXu123; #5260, #5229, #5206, #5174, #5160, @liangz1)

  • v1.22.0(Nov 30, 2021)

    MLflow 1.22.0 includes several major features and improvements:

    Features:

    • [UI] Add a share button to the Experiment page (#4936, @marijncv)
    • [UI] Improve readability of column sorting dropdown on Experiment page (#5022, @WeichenXu123; #5018, @NieuweNils, @coder-freestyle)
    • [Tracking] Mark all autologging integrations as stable by removing @experimental decorators (#5028, @liangz1)
    • [Tracking] Add optional experiment_id parameter to mlflow.set_experiment() (#5012, @dbczumar); see the sketch after this list
    • [Tracking] Add support for XGBoost scikit-learn models to mlflow.xgboost.autolog() (#5078, @jwyyy)
    • [Tracking] Improve statsmodels autologging performance by removing unnecessary metrics (#4942, @WeichenXu123)
    • [Tracking] Update R client to tag nested runs with parent run ID (#4197, @yitao-li)
    • [Models] Support saving and loading all XGBoost model types (#4954, @jwyyy)
    • [Scoring] Support specifying AWS account and role when deploying models to SageMaker (#4923, @andresionek91)
    • [Scoring] Support serving MLflow models with MLServer (#4963, @adriangonz)
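
    A short sketch of selecting an experiment by ID rather than by name; "0" is the default experiment on a fresh tracking store:

    import mlflow

    mlflow.set_experiment(experiment_id="0")
    with mlflow.start_run():
        mlflow.log_metric("loss", 0.1)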

    Bug fixes and documentation updates:

    • [UI] Fix bug causing Metric Plot page to crash when metric values are too large (#4947, @ianshan0915)
    • [UI] Fix bug causing parallel coordinate curves to vanish (#5087, @harupy)
    • [UI] Remove Creator field from Model Version page if user information is absent (#5089, @jinzhang21)
    • [UI] Fix model loading instructions for non-pyfunc models in Artifact Viewer (#5006, @harupy)
    • [Models] Fix a bug that added mlflow to conda.yaml even if a hashed version was already present (#5058, @maitre-matt)
    • [Docs] Add Python documentation for metric, parameter, and tag key / value length limits (#4991, @westford14)
    • [Examples] Update Python version used in Prophet example to fix installation errors (#5101, @BenWilson2)
    • [Examples] Fix Kubernetes resources specification in MLflow Projects + Kubernetes example (#4948, @jianyuan)

    Small bug fixes and doc updates (#5119, #5107, #5105, #5103, #5085, #5088, #5051, #5081, #5039, #5073, #5072, #5066, #5064, #5063, #5060, #4718, #5053, #5052, #5041, #5043, #5047, #5036, #5037, #5029, #5031, #5032, #5030, #5007, #5019, #5014, #5008, #4998, #4985, #4984, #4970, #4966, #4980, #4967, #4978, #4979, #4968, #4976, #4975, #4934, #4956, #4938, #4950, #4946, #4939, #4913, #4940, #4935, @harupy; #5095, #5070, #5002, #4958, #4945, @BenWilson2; #5099, @chaosddp; #5005, @you-n-g; #5042, #4952, @shrinath-suresh; #4962, #4995, @WeichenXu123; #5010, @lichenran1234; #5000, @wentinghu; #5111, @alexott; #5102, #5024, #5011, #4959, @dbczumar; #5075, #5044, #5026, #4997, #4964, #4989, @liangz1; #4999, @stevenchen-db)

  • v1.21.0(Oct 25, 2021)

    MLflow 1.21.0 includes several major features and improvements:

    Features:

    • [UI] Add a diff-only toggle to the runs table for filtering out columns with constant values (#4862, @marijncv)
    • [UI] Add a duration column to the runs table (#4840, @marijncv)
    • [UI] Display the default column sorting order in the runs table (#4847, @marijncv)
    • [UI] Add start_time and duration information to exported runs CSV (#4851, @marijncv)
    • [UI] Add lifecycle stage information to the run page (#4848, @marijncv)
    • [UI] Collapse run page sections by default for space efficiency, limit artifact previews to 50MB (#4917, @dbczumar)
    • [Tracking] Introduce autologging capabilities for PaddlePaddle model training (#4751, @jinminhao)
    • [Tracking] Add an optional tags field to the CreateExperiment API (#4788, @dbczumar; #4795, @apurva-koti); see the sketch after this list
    • [Tracking] Add support for deleting artifacts from SFTP stores via the mlflow gc CLI (#4670, @afaul)
    • [Tracking] Support AzureDefaultCredential for authenticating with Azure artifact storage backends (#4002, @marijncv)
    • [Models] Upgrade the fastai model flavor to support fastai V2 (>=2.4.1) (#4715, @jinzhang21)
    • [Models] Introduce an mlflow.prophet model flavor for Prophet time series models (#4773, @BenWilson2)
    • [Models] Introduce a CLI for publishing MLflow Models to the SageMaker Model Registry (#4669, @jinnig)
    • [Models] Print a warning when inferred model dependencies are not available on PyPI (#4891, @dbczumar)
    • [Models, Projects] Add MLFLOW_CONDA_CREATE_ENV_CMD for customizing Conda environment creation (#4746, @giacomov)
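
    A hedged sketch of attaching tags at experiment creation time via the fluent API; the experiment name and tag values are made up:

    import mlflow

    experiment_id = mlflow.create_experiment("demo-experiment", tags={"team": "analytics"})
    print(experiment_id)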

    Bug fixes and documentation updates:

    • [UI] Fix an issue where column selections made in the runs table were persisted across experiments (#4926, @sunishsheth2009)
    • [UI] Fix an issue where the text null was displayed in the runs table column ordering dropdown (#4924, @harupy)
    • [UI] Fix a bug causing the metric plot view to display NaN values upon click (#4858, @arpitjasa-db)
    • [Tracking] Fix a model load failure for paths containing spaces or special characters on UNIX systems (#4890, @BenWilson2)
    • [Tracking] Correct a migration issue that impacted usage of MLflow Tracking with SQL Server (#4880, @marijncv)
    • [Tracking] Spark datasource autologging tags now respect the maximum allowable size for MLflow Tracking (#4809, @dbczumar)
    • [Model Registry] Add previously-missing certificate sources for Model Registry REST API requests (#4731, @ericgosno91)
    • [Model Registry] Throw an exception when users supply invalid Model Registry URIs for Databricks (#4877, @yunpark93)
    • [Scoring] Fix a schema enforcement error that incorrectly cast date-like strings to datetime objects (#4902, @wentinghu)
    • [Docs] Expand the documentation for the MLflow Skinny Client (#4113, @eedeleon)

    Small bug fixes and doc updates (#4928, #4919, #4927, #4922, #4914, #4899, #4893, #4894, #4884, #4864, #4823, #4841, #4817, #4796, #4797, #4767, #4768, #4757, @harupy; #4863, #4838, @marijncv; #4834, @ksaur; #4772, @louisguitton; #4801, @twsl; #4929, #4887, #4856, #4843, #4789, #4780, @WeichenXu123; #4769, @Ark-kun; #4898, #4756, @apurva-koti; #4784, @lakshikaparihar; #4855, @ianshan0915; #4790, @eedeleon; #4931, #4857, #4846, #4777, #4748, @dbczumar)

  • v1.20.2(Sep 4, 2021)

    MLflow 1.20.2 is a patch release containing the following features and bug fixes:

    Features:

    • Enabled auto dependency inference in spark flavor in autologging (#4759, @harupy)

    Bug fixes and documentation updates:

    • Increased MLflow client HTTP request timeout from 10s to 120s (#4764, @jinzhang21)
    • Fixed autologging compatibility bugs with TensorFlow and Keras version 2.6.0 (#4766, @dbczumar)

    Small bug fixes and doc updates (#4770, @WeichenXu123)

  • v1.20.1(Aug 26, 2021)

    Note: The MLflow R package for 1.20.1 is not yet available but will be published within a week, because CRAN's submission system will be offline until September 1st.

    MLflow 1.20.1 is a patch release for the MLflow Python and R packages containing the following bug fixes:

    • Avoid calling importlib_metadata.packages_distributions upon mlflow.utils.requirements_utils import (#4741, @dbczumar)
    • Avoid depending on importlib_metadata==4.7.0 (#4740, @dbczumar)
  • v1.20.0(Aug 26, 2021)

    Note: The MLflow R package for 1.20.0 is not yet available but will be published within a week, because CRAN's submission system will be offline until September 1st.

    MLflow 1.20.0 includes several major features and improvements:

    Features:

    • Autologging for scikit-learn now records post-training metrics when scikit-learn evaluation APIs, such as sklearn.metrics.mean_squared_error, are called (#4491, #4628, #4638, @WeichenXu123)
    • Autologging for PySpark ML now records post-training metrics when model evaluation APIs, such as Evaluator.evaluate(), are called (#4686, @WeichenXu123)
    • Add pip_requirements and extra_pip_requirements to mlflow.*.log_model and mlflow.*.save_model for directly specifying the pip requirements of the model to log / save (#4519, #4577, #4602, @harupy)
    • Added stdMetrics entries to the training metrics recorded during PySpark CrossValidator autologging (#4672, @WeichenXu123)
    • MLflow UI updates:
      1. Improved scalability of the parallel coordinates plot for run performance comparison,
      2. Added support for filtering runs based on their start time on the experiment page,
      3. Added a dropdown for runs table column sorting on the experiment page,
      4. Upgraded the AG Grid plugin, which is used for runs table loading on the experiment page, to version 25.0.0,
      5. Fixed a bug on the experiment page that caused the metrics section of the runs table to collapse when selecting columns from other table sections (#4712, @dbczumar)
    • Added support for distributed execution to autologging for PyTorch Lightning (#4717, @dbczumar)
    • Expanded R support for Model Registry functionality (#4527, @bramrodenburg)
    • Added model scoring server support for defining custom prediction response wrappers (#4611, @Ark-kun)
    • mlflow.*.log_model and mlflow.*.save_model now automatically infer the pip requirements of the model to log / save based on the current software environment (#4518, @harupy)
    • Introduced support for running Sagemaker Batch Transform jobs with MLflow Models (#4410, #4589, @YQ-Wang)
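
    For illustration, a minimal sketch of the new pip_requirements / extra_pip_requirements options using the scikit-learn flavor (the model and package pins below are placeholders, not recommendations):

      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression

      X, y = load_iris(return_X_y=True)
      model = LogisticRegression(max_iter=1000).fit(X, y)

      with mlflow.start_run():
          # Fully override the inferred pip requirements of the logged model ...
          mlflow.sklearn.log_model(model, "model", pip_requirements=["scikit-learn==0.24.2"])
          # ... or keep the inferred requirements and append extra packages.
          mlflow.sklearn.log_model(model, "model_extra", extra_pip_requirements=["boto3"])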

    Bug fixes and documentation updates:

    • Deprecate requirements_file argument for mlflow.*.save_model and mlflow.*.log_model (#4620, @harupy)
    • Set nextPageToken to null (#4729, @harupy)
    • Fix a bug in MLflow UI where the pagination token for run search is not refreshed when switching experiments (#4709, @harupy)
    • Fix a bug in the model scoring server that rejected requests specifying a valid Content-Type header with the charset parameter (#4609, @Ark-kun)
    • Fix a bug that caused SQLAlchemy backends to exhaust DB connections (#4663, @arpitjasa-db)
    • Improve docker build procedures to raise exceptions if docker builds fail (#4610, @Ark-kun)
    • Disable autologging for scikit-learn cross_val_* APIs, which are incompatible with autologging (#4590, @WeichenXu123)
    • Deprecate MLflow Models support for fast.ai V1 (#4728, @dbczumar)
    • Deprecate the old Azure ML deployment APIs mlflow.azureml.cli.build_image and mlflow.azureml.build_image (#4646, @trangevi)
    • Deprecate MLflow Models support for TensorFlow < 2.0 and Keras < 2.3 (#4716, @harupy)

    Small bug fixes and doc updates (#4730, #4722, #4725, #4723, #4703, #4710, #4679, #4694, #4707, #4708, #4706, #4705, #4625, #4701, #4700, #4662, #4699, #4682, #4691, #4684, #4683, #4675, #4666, #4648, #4653, #4651, #4641, #4649, #4627, #4637, #4632, #4634, #4621, #4619, #4622, #4460, #4608, #4605, #4599, #4600, #4581, #4583, #4565, #4575, #4564, #4580, #4572, #4570, #4574, #4576, #4568, #4559, #4537, #4542, @harupy; #4698, #4573, @Ark-kun; #4674, @kvmakes; #4555, @vagoston; #4644, @zhengjxu; #4690, #4588, @apurva-koti; #4545, #4631, #4734, @WeichenXu123; #4633, #4292, @shrinath-suresh; #4711, @jinzhang21; #4688, @murilommen; #4635, @ryan-duve; #4724, #4719, #4640, #4639, #4629, #4612, #4613, #4586, @dbczumar)

  • v1.19.0(Jul 14, 2021)

    MLflow 1.19.0 includes several major features and improvements:

    Features:

    • Add support for plotting per-class feature importance computed on linear boosters in XGBoost autologging (#4523, @dbczumar)

    • Add mlflow_create_registered_model and mlflow_delete_registered_model for R to create/delete registered models.

    • Add support for setting tags while resuming a run (#4497, @dbczumar); see the sketch after this list

    • MLflow UI updates (#4490, @sunishsheth2009)

      • Add framework for internationalization support.
      • Move metric columns before parameter and tag columns in the runs table.
      • Change the display format of run start time to elapsed time (e.g. 3 minutes ago) from timestamp (e.g. 2021-07-14 14:02:10) in the runs table.
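
    As referenced in the list above, a minimal sketch of setting tags while resuming a run; the run ID handling is illustrative only:

      import mlflow

      # Start a run and remember its ID so it can be resumed later.
      with mlflow.start_run() as run:
          mlflow.log_param("alpha", 0.4)
          run_id = run.info.run_id

      # Resume the same run, attaching additional tags while doing so.
      with mlflow.start_run(run_id=run_id, tags={"stage": "resumed"}):
          mlflow.log_metric("rmse", 0.25)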

    Bug fixes and documentation updates:

    • Fix a bug causing MLflow UI to crash when sorting a column containing both NaN and empty values (#3409, @harupy)

    Small bug fixes and doc updates (#4541, #4534, #4533, #4517, #4508, #4513, #4512, #4509, #4503, #4486, #4493, #4469, @harupy; #4458, @KasirajanA; #4501, @jimmyxu-db; #4521, #4515, @jerrylian-db; #4359, @shrinath-suresh; #4544, @WeichenXu123; #4549, @smurching; #4554, @derkomai; #4506, @tomasatdatabricks; #4551, #4516, #4494, @dbczumar; #4511, @keypointt)

  • v1.18.0(Jun 18, 2021)

    MLflow 1.18.0 includes the following features and improvements:

    Features:

    • Autologging performance improvements for XGBoost, LightGBM, and scikit-learn (#4416, #4473, @dbczumar)
    • Add new PaddlePaddle flavor to MLflow Models (#4406, #4439, @jinminhao)
    • Introduce paginated ListExperiments API (#3881, @wamartin-aml)
    • Include Runtime version for MLflow Models logged on Databricks (#4421, @stevenchen-db)
    • MLflow Models now log dependencies in pip requirements.txt format, in addition to existing conda format (#4409, #4422, @stevenchen-db)
    • Add support for limiting the number of child runs created by autologging for scikit-learn hyperparameter search models (#4382, @mohamad-arabi); see the sketch after this list
    • Improve artifact upload / download performance on Databricks (#4260, @dbczumar)
    • Migrate all model dependencies from conda to "pip" section (#4393, @WeichenXu123)
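
    As referenced in the list above, a hedged sketch of capping autologged child runs for a scikit-learn hyperparameter search; the max_tuning_runs parameter name is an assumption here, so check the mlflow.sklearn.autolog documentation for the exact option:

      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import load_iris
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      # Assumption: max_tuning_runs limits how many child runs autologging
      # creates for hyperparameter search estimators such as GridSearchCV.
      mlflow.sklearn.autolog(max_tuning_runs=3)

      X, y = load_iris(return_X_y=True)
      search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]})

      with mlflow.start_run():
          search.fit(X, y)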

    Bug fixes and documentation updates:

    • Fix an MLflow UI bug that caused git source URIs to be rendered improperly (#4403, @takabayashi)
    • Fix a bug that prevented reloading of MLflow Models based on the TensorFlow SavedModel format (#4223) (#4319, @saschaschramm)
    • Fix a bug in the behavior of KubernetesSubmittedRun.get_status() for Kubernetes MLflow Project runs (#3962) (#4159, @jcasse)
    • Fix a bug in TLS verification for MLflow artifact operations on S3 (#4047, @PeterSulcs)
    • Fix a bug causing the MLflow server to crash after deletion of the default experiment (#4352, @asaf400)
    • Fix a bug causing mlflow models serve to crash on Windows 10 (#4377, @simonvanbernem)
    • Fix a crash in runs search when ordering by metric values against the MSSQL backend store (#2551) (#4238, @naor2013)
    • Fix an autologging incompatibility issue with TensorFlow 2.5 (#4371, @dbczumar)
    • Fix a bug in the disable_for_unsupported_versions autologging argument that caused library versions to be incorrectly compared (#4303, @WeichenXu123)

    Small bug fixes and doc updates (#4405, @mohamad-arabi; #4455, #4461, #4459, #4464, #4453, #4444, #4449, #4301, #4424, #4418, #4417, #3759, #4398, #4389, #4386, #4385, #4384, #4380, #4373, #4378, #4372, #4369, #4348, #4364, #4363, #4349, #4350, #4174, #4285, #4341, @harupy; #4446, @kHarshit; #4471, @AveshCSingh; #4435, #4440, #4368, #4360, @WeichenXu123; #4431, @apurva-koti; #4428, @stevenchen-db; #4467, #4402, #4261, @dbczumar)

  • v1.17.0(May 8, 2021)

    MLflow 1.17.0 includes the following major features and improvements:

    Features:

    • Add support for hyperparameter-tuning models to mlflow.pyspark.ml.autolog() (#4270, @WeichenXu123)

    Bug fixes and documentation updates:

    • Fix PyTorch Lightning callback definition for compatibility with PyTorch Lightning 1.3.0 (#4333, @dbczumar)
    • Fix a bug in scikit-learn autologging that omitted artifacts for unsupervised models (#4325, @dbczumar)
    • Support logging datetime.date objects as part of model input examples (#4313, @vperiyasamy)
    • Implement HTTP request retries in the MLflow Java client for 500-level responses (#4311, @dbczumar)
    • Include a community code of conduct (#4310, @dennyglee)

    Small bug fixes and doc updates (#4276, #4263, @WeichenXu123; #4289, #4302, #3599, #4287, #4284, #4265, #4266, #4275, #4268, @harupy; #4335, #4297, @dbczumar; #4324, #4320, @tleyden)

  • v1.16.0(Apr 27, 2021)

    MLflow 1.16.0 includes several major features and improvements:

    Features:

    • Add mlflow.pyspark.ml.autolog() API for autologging of pyspark.ml estimators (#4228, @WeichenXu123)
    • Add mlflow.catboost.log_model, mlflow.catboost.save_model, mlflow.catboost.load_model APIs for CatBoost model persistence (#2417, @harupy)
    • Enable mlflow.pyfunc.spark_udf to use column names from model signature by default (#4236, @Loquats)
    • Add datetime data type for model signatures (#4241, @vperiyasamy)
    • Add mlflow.sklearn.eval_and_log_metrics API that computes and logs metrics for the given scikit-learn model and labeled dataset (#4218, @alkispoly-db)
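
    For illustration, a minimal sketch of mlflow.sklearn.eval_and_log_metrics on a held-out split (the dataset, model, and "val_" prefix are placeholders):

      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      X, y = load_iris(return_X_y=True)
      X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      with mlflow.start_run():
          # Computes classifier metrics on the validation split and logs them to
          # the active run, prefixing each metric name with "val_".
          metrics = mlflow.sklearn.eval_and_log_metrics(model, X_val, y_val, prefix="val_")
          print(metrics)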

    Bug fixes and documentation updates:

    • Fix a database migration error for PostgreSQL (#4211, @dolfinus)
    • Fix autologging silent mode bugs (#4231, @dbczumar)

    Small bug fixes and doc updates (#4255, #4252, #4254, #4253, #4242, #4247, #4243, #4237, #4233, @harupy; #4225, @dmatrix; #4206, @mlflow-automation; #4207, @shrinath-suresh; #4264, @WeichenXu123; #3884, #3866, #3885, @ankan94; #4274, #4216, @dbczumar)

  • v1.15.0(Mar 26, 2021)

    MLflow 1.15.0 includes several features, bug fixes and improvements. Notably, it includes a number of improvements to MLflow autologging:

    Features:

    • Add silent=False option to all autologging APIs, to allow suppressing MLflow warnings and logging statements during autologging setup and training (#4173, @dbczumar)
    • Add disable_for_unsupported_versions=False option to all autologging APIs, to disable autologging for versions of ML frameworks that have not been explicitly tested against the current version of the MLflow client (#4119, @WeichenXu123)
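
    For illustration, a minimal sketch of the two new options applied through the universal autologging entry point:

      import mlflow

      # Suppress MLflow's own warnings and event logs during autologging setup and
      # training, and skip autologging for framework versions that MLflow has not
      # been tested against.
      mlflow.autolog(silent=True, disable_for_unsupported_versions=True)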

    Bug fixes:

    • Autologged runs are now terminated when execution is interrupted via SIGINT (#4200, @dbczumar)
    • The R mlflow_get_experiment API now returns the same tag structure as mlflow_list_experiments and mlflow_get_run (#4017, @lorenzwalthert)
    • Fix bug where mlflow.tensorflow.autolog would previously mutate the user-specified callbacks list when fitting tf.keras models (#4195, @dbczumar)
    • Fix bug where SQL-backed MLflow tracking server initialization failed when using the MLflow skinny client (#4161, @eedeleon)
    • Model version creation (e.g. via mlflow.register_model) now fails if the model version status is not READY (#4114, @ankit-db)

    Small bug fixes and doc updates (#4191, #4149, #4162, #4157, #4155, #4144, #4141, #4138, #4136, #4133, #3964, #4130, #4118, @harupy; #4152, @mlflow-automation; #4139, @WeichenXu123; #4193, @smurching; #4029, @architkulkarni; #4134, @xhochy; #4116, @wenleix; #4160, @wentinghu; #4203, #4184, #4167, @dbczumar)

  • v1.14.1(Mar 2, 2021)

    MLflow 1.14.1 is a patch release containing the following bug fix:

    • Fix issues in handling flexible numpy datatypes in TensorSpec (#4147, @arjundc-db)
  • v1.14.0(Feb 20, 2021)

    We are happy to announce the availability of MLflow 1.14.0!

    In addition to bug and documentation fixes, MLflow 1.14.0 includes the following features and improvements:

    Python 3.5 has been deprecated

    MLflow support for Python 3.5 is deprecated and will be dropped in an upcoming release. At that point, existing Python 3.5 workflows that use MLflow will continue to work without modification, but Python 3.5 users will no longer get access to the latest MLflow features and bugfixes. We recommend that you upgrade to Python 3.6 or newer.

    Features and improvements

    • MLflow's model inference APIs (mlflow.pyfunc.predict), built-in model serving tools (mlflow models serve), and model signatures now support tensor inputs. In particular, MLflow now provides built-in support for scoring PyTorch, TensorFlow, Keras, ONNX, and Gluon models with tensor inputs. For more information, see https://mlflow.org/docs/latest/models.html#deploy-mlflow-models (#3808, #3894, #4084, #4068 @wentinghu; #4041 @tomasatdatabricks, #4099, @arjundc-db)
    • Add new mlflow.shap.log_explainer, mlflow.shap.load_explainer APIs for logging and loading shap.Explainer instances (#3989, @vivekchettiar)
    • The MLflow Python client is now available with a reduced dependency set via the mlflow-skinny PyPI package (#4049, @eedeleon)
    • Add new RequestHeaderProvider plugin interface for passing custom request headers with REST API requests made by the MLflow Python client (#4042, @jimmyxu-db)
    • mlflow.keras.log_model now saves models in the TensorFlow SavedModel format by default instead of the older Keras H5 format (#4043, @harupy)
    • mlflow_log_model now supports logging MLeap models in R (#3819, @yitao-li)
    • Add mlflow.pytorch.log_state_dict, mlflow.pytorch.load_state_dict for logging and loading PyTorch state dicts (#3705, @shrinath-suresh); see the sketch after this list
    • mlflow gc can now garbage-collect artifacts stored in S3 (#3958, @sklingel)
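
    As referenced in the list above, a minimal sketch of logging and reloading a PyTorch state dict (the model and artifact path are placeholders):

      import mlflow
      import mlflow.pytorch
      import torch

      model = torch.nn.Linear(4, 2)

      with mlflow.start_run() as run:
          # Log only the state dict (weights), not a full MLflow Model.
          mlflow.pytorch.log_state_dict(model.state_dict(), artifact_path="state_dict")

      # Reload the state dict from the finished run and restore the weights.
      state_dict = mlflow.pytorch.load_state_dict(f"runs:/{run.info.run_id}/state_dict")
      model.load_state_dict(state_dict)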

    Bug fixes and documentation updates:

    • Enable autologging for TensorFlow estimators that extend tensorflow.compat.v1.estimator.Estimator (#4097, @mohamad-arabi)
    • Fix for universal autolog configs overriding integration-specific configs (#4093, @dbczumar)
    • Allow mlflow.models.infer_signature to handle dataframes containing pandas.api.extensions.ExtensionDtype (#4069, @caleboverman)
    • Fix bug where mlflow_restore_run doesn't propagate the client parameter to mlflow_get_run (#4003, @yitao-li)
    • Fix bug where scoring on a served model fails when the request data contains a string that looks like a URL and the pandas version is later than 1.1.0 (#3921, @Secbone)
    • Fix bug causing mlflow_list_experiments to fail listing experiments with tags (#3942, @lorenzwalthert)
    • Fix bug where metrics plots are computed from incorrect target values in scikit-learn autologging (#3993, @mtrencseni)
    • Remove redundant / verbose Python event logging message in autologging (#3978, @dbczumar)
    • Fix bug where mlflow_load_model doesn't load metadata associated with the MLflow model flavor in R (#3872, @yitao-li)
    • Fix mlflow.spark.log_model, mlflow.spark.load_model APIs on passthrough-enabled environments against ACL'd artifact locations (#3443, @smurching)

    Small bug fixes and doc updates:

    (#4102, #4101, #4096, #4091, #4067, #4059, #4016, #4054, #4052, #4051, #4038, #3992, #3990, #3981, #3949, #3948, #3937, #3834, #3906, #3774, #3916, #3907, #3938, #3929, #3900, #3902, #3899, #3901, #3891, #3889, @harupy; #4014, #4001, @dmatrix; #4028, #3957, @dbczumar; #3816, @lorenzwalthert; #3939, @pauldj54; #3740, @jkthompson; #4070, #3946, @jimmyxu-db; #3836, @t-henri; #3982, @neo-anderson; #3972, #3687, #3922, @eedeleon; #4044, @WeichenXu123; #4063, @yitao-li; #3976, @whiteh; #4110, @tomasatdatabricks; #4050, @apurva-koti; #4100, #4084, @wentinghu; #3947, @vperiyasamy; #4021, @trangevi; #3773, @ankan94; #4090, @jinzhang21; #3918, @danielfrg)

  • v1.13.1(Dec 31, 2020)

    MLflow 1.13.1 is a patch release containing bug fixes and small changes:

    • Fix bug causing Spark autologging to ignore configuration options specified by mlflow.autolog() (#3917, @dbczumar)
    • Fix bugs causing metrics to be dropped during TensorFlow autologging (#3913, #3914, @dbczumar)
    • Fix an incorrect value of the optimizer name parameter in PyTorch Lightning autologging (#3901, @harupy)
    • Fix model registry database allow_null_for_run_id migration failure affecting MySQL databases (#3836, @t-henri)
    • Fix a failure in transition_model_version_stage when a non-canonical stage name is passed (#3929, @harupy)
    • Fix an undefined variable error causing AzureML model deployment to fail (#3922, @eedeleon)
    • Reclassify scikit-learn as a pip dependency in MLflow Model conda environments (#3896, @harupy)
    • Fix experiment view crash and artifact view inconsistency caused by artifact URIs with redundant slashes (#3928, @dbczumar)
  • v1.13.0(Dec 25, 2020)

    We are happy to announce the availability of MLflow 1.13.0!

    Note: The MLflow R package for 1.13.0 is not yet available on CRAN because CRAN's submission system will be offline until January 4.

    In addition to bug and documentation fixes, MLflow 1.13.0 includes the following features and improvements:

    Features:

    New fluent APIs for logging in-memory objects as artifacts:

    • Add mlflow.log_text which logs text as an artifact (#3678, @harupy)
    • Add mlflow.log_dict which logs a dictionary as an artifact (#3685, @harupy)
    • Add mlflow.log_figure which logs a figure object as an artifact (#3707, @harupy)
    • Add mlflow.log_image which logs an image object as an artifact (#3728, @harupy)
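
    For illustration, a minimal sketch of these fluent APIs inside an active run (numpy and matplotlib are needed only for the figure and image calls):

      import mlflow
      import numpy as np
      import matplotlib.pyplot as plt

      with mlflow.start_run():
          mlflow.log_text("hello from MLflow", "notes/hello.txt")
          mlflow.log_dict({"lr": 0.01, "epochs": 10}, "config.json")

          fig, ax = plt.subplots()
          ax.plot([0, 1], [0, 1])
          mlflow.log_figure(fig, "plots/line.png")

          # log_image accepts e.g. a numpy array (uint8, HxWx3) or a PIL image.
          mlflow.log_image(np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8), "images/noise.png")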

    UI updates / fixes:

    • Add model version link in compact experiment table view
    • Add logged/registered model links in experiment runs page view
    • Enhance artifact viewer for MLflow models
    • Model registry UI settings are now persisted across browser sessions
    • Add model version description field to model version table

    (all merged in #3867, @smurching)

    Autologging enhancements:

    • Improve robustness of autologging integrations to exceptions (#3682, #3815, @dbczumar; #3860, @mohamad-arabi; #3854, #3855, #3861, @harupy)
    • Add disable configuration option for autologging (#3682, #3815, @dbczumar; #3838, @mohamad-arabi; #3854, #3855, #3861, @harupy)
    • Add exclusive configuration option for autologging (#3851, @apurva-koti; #3869, @dbczumar)
    • Add log_models configuration option for autologging (#3663, @mohamad-arabi)
    • Set tags on autologged runs for easy identification (and add tags to start_run) (#3847, @dbczumar)

    More features and improvements:

    • Allow Keras models to be saved with SavedModel format (#3552, @skylarbpayne)
    • Add support for statsmodels flavor (#3304, @olbapjose)
    • Add support for nested-run in mlflow R client (#3765, @yitao-li)
    • Deploying a model using mlflow.azureml.deploy now integrates better with the AzureML tracking/registry (#3419, @trangevi)
    • Update schema enforcement to handle integers with missing values (#3798, @tomasatdatabricks)

    Bug fixes and documentation updates:

    • When running an MLflow Project on Databricks, the version of MLflow installed on the Databricks cluster will now match the version used to run the Project (#3880, @FlorisHoogenboom)
    • Fix bug where metrics are not logged for single-epoch tf.keras training sessions (#3853, @dbczumar)
    • Reject boolean types when logging MLflow metrics (#3822, @HCoban)
    • Fix alignment of Keras / tf.Keras metric history entries when initial_epoch is different from zero (#3575, @garciparedes)
    • Fix bugs in autologging integrations for newer versions of TensorFlow and Keras (#3735, @dbczumar)
    • Drop global filterwarnings module at import time (#3621, @jogo)
    • Fix bug that caused preexisting Python loggers to be disabled when using MLflow with the SQLAlchemyStore (#3653, @arthury1n)
    • Fix h5py library incompatibility for exported Keras models (#3667, @tomasatdatabricks)

    Small changes, bug fixes and doc updates (#3887, #3882, #3845, #3833, #3830, #3828, #3826, #3825, #3800, #3809, #3807, #3786, #3794, #3731, #3776, #3760, #3771, #3754, #3750, #3749, #3747, #3736, #3701, #3699, #3698, #3658, #3675, @harupy; #3723, @mohamad-arabi; #3650, #3655, @shrinath-suresh; #3850, #3753, #3725, @dmatrix; #3867, #3670, #3664, @smurching; #3681, @sueann; #3619, @andrewnitu; #3837, @javierluraschi; #3721, @szczeles; #3653, @arthury1n; #3883, #3874, #3870, #3877, #3878, #3815, #3859, #3844, #3703, @dbczumar; #3768, @wentinghu; #3784, @HCoban; #3643, #3649, @arjundc-db; #3864, @AveshCSingh; #3756, @yitao-li)

  • v1.12.1(Nov 19, 2020)

    MLflow 1.12.1 is a patch release containing bug fixes and small changes:

    • Fix run_link for cross-workspace model versions (#3681, @sueann)
    • Remove hard dependency on matplotlib for sklearn autologging (#3703, @dbczumar)
    • Do not disable existing loggers when initializing alembic (#3653, @arthury1n)