Kubeflow is a machine learning (ML) toolkit that is dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable.

Overview

Overview of the Kubeflow pipelines service

Kubeflow pipelines are reusable end-to-end ML workflows built using the Kubeflow Pipelines SDK.

The Kubeflow pipelines service has the following goals:

  • End-to-end orchestration: enabling and simplifying the orchestration of end-to-end machine learning pipelines.
  • Easy experimentation: making it easy for you to try numerous ideas and techniques and to manage your various trials and experiments.
  • Easy re-use: enabling you to re-use components and pipelines to quickly assemble end-to-end solutions without having to rebuild each time.

Installation

  • Install Kubeflow Pipelines using one of the options described in Installation Options for Kubeflow Pipelines.

  • [Alpha] Starting from Kubeflow Pipelines 1.7, try out the Emissary executor. The Emissary executor is container-runtime agnostic, meaning you can run Kubeflow Pipelines on a Kubernetes cluster with any container runtime. The default Docker executor depends on the Docker container runtime, which is deprecated on Kubernetes 1.20+.
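
Switching an existing installation to the Emissary executor amounts to patching the Argo workflow-controller ConfigMap. Below is a minimal sketch using the Kubernetes Python client; the ConfigMap and namespace names match a default install, but verify them on your cluster:

```python
# Sketch: switch the Argo workflow controller to the emissary executor.
# Assumes the default KFP install layout (namespace "kubeflow",
# ConfigMap "workflow-controller-configmap"); verify these on your cluster.
EMISSARY_PATCH = {"data": {"containerRuntimeExecutor": "emissary"}}

def apply_emissary_patch(namespace: str = "kubeflow"):
    """Apply the patch against a live cluster (requires the `kubernetes` package)."""
    from kubernetes import client, config  # imported lazily: only needed with a cluster
    config.load_kube_config()
    return client.CoreV1Api().patch_namespaced_config_map(
        name="workflow-controller-configmap",
        namespace=namespace,
        body=EMISSARY_PATCH,
    )
```

This is equivalent to a one-line `kubectl patch` against the same ConfigMap.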

Documentation

Get started with your first pipeline and read further information in the Kubeflow Pipelines overview.

See the various ways you can use the Kubeflow Pipelines SDK.

See the Kubeflow Pipelines API doc for API specification.

Consult the Python SDK reference docs when writing pipelines using the Python SDK.

Refer to the versioning policy and feature stages documentation for more information about how we manage versions and feature stages (such as Alpha, Beta, and Stable).

Contributing to Kubeflow Pipelines

Before you start contributing to Kubeflow Pipelines, read the guidelines in How to Contribute. To learn how to build and deploy Kubeflow Pipelines from source code, read the developer guide.

Kubeflow Pipelines Community Meeting

The meeting happens every other Wednesday, 10-11 AM (PST). Calendar Invite or Join Meeting Directly

Meeting notes

Kubeflow Pipelines Slack Channel

#kubeflow-pipelines

Blog posts

Acknowledgments

Kubeflow Pipelines uses Argo Workflows by default under the hood to orchestrate Kubernetes resources. The Argo community has been very supportive, and we are very grateful. A Tekton backend is also available; to use it, refer to the Kubeflow Pipelines with Tekton repository.

Comments
  • [Multi User] failed to call `kfp.Client().create_run_from_pipeline_func` in in-cluster jupyter notebook

    What steps did you take:

    In a multi-user enabled environment, I created a notebook server in the user's namespace, launched a notebook, and tried to call the Python SDK from there. When I executed the code below:

    pipeline = kfp.Client().create_run_from_pipeline_func(mnist_pipeline, arguments={}, namespace='mynamespace')
    

    What happened:

    The API call was rejected with the following errors:

    ~/.local/lib/python3.6/site-packages/kfp_server_api/rest.py in request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout)
        236 
        237         if not 200 <= r.status <= 299:
    --> 238             raise ApiException(http_resp=r)
        239 
        240         return r
    
    ApiException: (403)
    Reason: Forbidden
    HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Tue, 01 Sep 2020 00:58:39 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '8'})
    HTTP response body: RBAC: access denied
    

    What did you expect to happen:

    A pipeline run should be created and executed
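
    For reference, a minimal sketch of how the in-cluster multi-user call is intended to look (hedged: `set_user_namespace` persists a default profile namespace for later calls, and the namespace must also be authorized for the notebook's service account, which is the usual culprit behind an RBAC denial like the one above):

```python
def run_in_namespace(pipeline_func, namespace, arguments=None):
    """Submit a run in the given profile namespace (requires a KFP cluster)."""
    import kfp  # requires the kfp SDK; imported lazily

    client = kfp.Client()  # in-cluster: the API host is resolved automatically
    client.set_user_namespace(namespace)  # persist the default namespace locally
    return client.create_run_from_pipeline_func(
        pipeline_func, arguments=arguments or {}, namespace=namespace
    )
```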

    Environment:

    How did you deploy Kubeflow Pipelines (KFP)?

    I installed KFP on IKS with multi-user support.
    KFP version: v1.1.0
    KFP SDK version: v1.0.0

    Anything else you would like to add:

    [Miscellaneous information that will assist in solving the issue.]

    /kind bug

    kind/feature 
    opened by yhwang 127
  • WIP: test oss prow configuration

    opened by Bobgy 83
  • feat(compiler): add dsl operation for parallelism on sub dag level

    Description of your changes: This PR adds parallelism limits for sub-DAGs. This is a continuation of https://github.com/kubeflow/pipelines/pull/4149, which relates to the issue. Checklist:

    • [ ] The title for your pull request (PR) should follow our title convention. Learn more about the pull request title convention used in this repository.

      PR titles examples:

      • fix(frontend): fixes empty page. Fixes #1234 Use fix to indicate that this PR fixes a bug.
      • feat(backend): configurable service account. Fixes #1234, fixes #1235 Use feat to indicate that this PR adds a new feature.
      • chore: set up changelog generation tools Use chore to indicate that this PR makes some changes that users don't need to know.
      • test: fix CI failure. Part of #1234 Use part of to indicate that a PR is working on an issue, but shouldn't close the issue when merged.
    • [ ] Do you want this pull request (PR) cherry-picked into the current release branch?

      If yes, use one of the following options:

      • (Recommended.) Ask the PR approver to add the cherrypick-approved label to this PR. The release manager adds this PR to the release branch in a batch update.
      • After this PR is merged, create a cherry-pick PR to add these changes to the release branch. (For more information about creating a cherry-pick PR, see the Kubeflow Pipelines release guide.)
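
    The feature this PR adds can be sketched in the v1 DSL as a parallelism cap on a ParallelFor sub-DAG (hedged: the echo op and image are illustrative, and the exact `parallelism` argument depends on your SDK version):

```python
def build_pipeline():
    """Build a demo pipeline (requires the kfp v1 SDK)."""
    import kfp.dsl as dsl  # imported lazily

    @dsl.pipeline(name="parallelism-demo")
    def pipe():
        # At most 2 of the 4 loop iterations run concurrently.
        with dsl.ParallelFor([1, 2, 3, 4], parallelism=2) as item:
            dsl.ContainerOp(
                name="echo",
                image="alpine:3.13",
                command=["echo"],
                arguments=[item],
            )

    return pipe
```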
    lgtm approved size/L cla: yes 
    opened by NikeNano 70
  • Multi-User support for Kubeflow Pipelines

    [April/6/2020] Latest design is in https://docs.google.com/document/d/1R9bj1uI0As6umCTZ2mv_6_tjgFshIKxkSt00QLYjNV4/edit?ts=5e4d8fbb#heading=h.5s8rbufek1ax

    Areas we are working on:

    • [x] [Frontend] Deploy ui artifact service for each namespace https://github.com/kubeflow/pipelines/issues/3554
    • [x] [Frontend/Backend] Deploy visualization service for each namespace https://github.com/kubeflow/pipelines/issues/2899
    • [x] [Backend] Use experiment for resource boundary for child resource CRUD https://github.com/kubeflow/pipelines/issues/2397
      • [x] Experiment https://github.com/kubeflow/pipelines/issues/3273
      • [x] Run https://github.com/kubeflow/pipelines/issues/3336
      • [x] Job https://github.com/kubeflow/pipelines/issues/3344
    • [x] [Frontend/SDK/Backend] Skip specify namespace for CreateRun APIs https://github.com/kubeflow/pipelines/issues/3290
    • [x] [Deployment] Enable MLMD functionality in multi-user mode https://github.com/kubeflow/pipelines/issues/3292
    • [x] [Frontend] Block non public api from frontend (e.g. report api) in multi-user mode https://github.com/kubeflow/pipelines/issues/3293
    • [x] [Frontend/Controller] Launch Tensorboard in user's namespace https://github.com/kubeflow/pipelines/issues/3294
    • [x] [Frontend] Pass namespace as a parameter for experiment API https://github.com/kubeflow/pipelines/issues/3291
    • [x] [Frontend] Pass namespace as a parameter for run API https://github.com/kubeflow/pipelines/pull/3351
    • [x] [Frontend] UI should react when user changes namespace https://github.com/kubeflow/pipelines/issues/3296
    • [x] [SDK] Pass namespace as a parameter for experiment APIs https://github.com/kubeflow/pipelines/pull/3272
    • [x] [Deployment] KFP profile controller that configures KFP required resources in each user's namespaces https://github.com/kubeflow/pipelines/issues/3420
    • [ ] [Test] Postsubmit test for multi user e2e scenario https://github.com/kubeflow/pipelines/issues/3288
    • [ ] [Test] Backend integration tests for multi-user scenarios https://github.com/kubeflow/pipelines/issues/3289
    • [ ] [Test] Network auth integration tests https://github.com/kubeflow/pipelines/issues/3646
    • [x] [Deployment] Make user identity header configurable #3752
    • [x] [Doc] documentation on kubeflow.org #4317

    Release

    • [x] How do we release KFP multi user mode? https://github.com/kubeflow/pipelines/issues/3645
    • [x] Multi user mode early access release #3693
    • [x] [Deployment] Merge changes to upstream kubeflow repo https://github.com/kubeflow/pipelines/issues/3241
    • [x] Integrate with platforms other than GCP https://github.com/kubeflow/manifests/issues/1364

    Areas related to integration with Kubeflow

    • [ ] [Central Dashboard] Manage contributors for all namespaces I own https://github.com/kubeflow/kubeflow/issues/4569
    • [x] [Central Dashboard] Support login to Kubeflow cluster without creating his/her namespace for a non-admin contributor https://github.com/kubeflow/kubeflow/issues/4889
    • [ ] [Profile CRD] Support more than one owner of a profile CR https://github.com/kubeflow/kubeflow/issues/4888
    • [ ] [Profile CRD] Support updating the owner of a profile https://github.com/kubeflow/kubeflow/issues/4890

    =============== original description

    Some users have expressed interest in isolation between the cluster admin and cluster users: the cluster admin deploys Kubeflow Pipelines as part of Kubeflow in the cluster, while cluster users can use Kubeflow Pipelines functionality without being able to access the control plane.

    Here are the steps to support this functionality.

    1. Provision control plane in one namespace, and launch argo workflow instances in another
      • provision control plane in kubeflow namespace, and argo job in namespace FOO (parameterization)
      • API server should update the incoming workflow definition to namespace FOO. Sample code showing how the API server modifies the workflow
    2. Currently all workflows run under a clusterrole pipeline-runner (definition), which is specified during compilation (link). Instead, the workflows should run under a role rather than a clusterrole.
      • change pipeline-runner to role, and specify the namespace during deployment (expose as deployment parameter)
      • API server should update the incoming workflow definition to use pipeline-runner role.
    3. Cluster user can access UI through IAP/SimpleAuth endpoint, instead of port-forwarding.
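
    Step 1's namespace rewrite can be illustrated with plain dictionaries (illustrative only; the actual rewrite lives in the Go API server, and the field names follow the Argo Workflow schema):

```python
import copy

def rewrite_workflow(workflow, namespace, service_account):
    """Return a copy of an Argo Workflow manifest retargeted at a user namespace."""
    wf = copy.deepcopy(workflow)  # leave the caller's manifest untouched
    wf.setdefault("metadata", {})["namespace"] = namespace
    # Run under a namespaced service account bound to a Role, not a ClusterRole.
    wf.setdefault("spec", {})["serviceAccountName"] = service_account
    return wf

original = {"metadata": {"name": "mnist-run"}, "spec": {"entrypoint": "train"}}
patched = rewrite_workflow(original, "FOO", "pipeline-runner")
```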
    help wanted priority/p1 area/frontend area/backend kind/feature area/wide-impact status/triaged 
    opened by IronPan 67
  • Support for non-docker based deployments

    Do you think it would be possible to support non-Docker based clusters as well? I'm currently checking out the examples and see that they mount docker.sock into the container. We might achieve the same results using crictl. WDYT?

    priority/p1 area/development 
    opened by saschagrunert 63
  • feat(backend): Added multi-user pipelines API. Fixes #4197

    Added namespaced pipelines, with UI and API changes, as well as the ability to share pipelines.

    Fixes: https://github.com/kubeflow/pipelines/issues/4197

    Description of your changes:

    • Added a new field in Pipelines table for namespace.
    • Uploaded Pipelines are by default namespaced.
    • Ability to share Pipelines by selecting "shared" check-mark in the UI.
    • Authorization via SubjectAccessReview for Pipelines, PipelinesVersions, and Upload Pipelines endpoints.

    Authors: @arllanos @maganaluis

    lgtm approved size/XL ok-to-test cla: yes 
    opened by maganaluis 44
  • Configure Renovate

    WhiteSource Renovate

    Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.

    :vertical_traffic_light: To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.


    Detected Package Files

    • WORKSPACE (bazel)
    • backend/Dockerfile (dockerfile)
    • backend/Dockerfile.bazel (dockerfile)
    • backend/Dockerfile.cacheserver (dockerfile)
    • backend/Dockerfile.persistenceagent (dockerfile)
    • backend/Dockerfile.scheduledworkflow (dockerfile)
    • backend/Dockerfile.viewercontroller (dockerfile)
    • backend/Dockerfile.visualization (dockerfile)
    • backend/metadata_writer/Dockerfile (dockerfile)
    • backend/src/cache/deployer/Dockerfile (dockerfile)
    • components/gcp/container/Dockerfile (dockerfile)
    • components/kubeflow/deployer/Dockerfile (dockerfile)
    • components/kubeflow/dnntrainer/Dockerfile (dockerfile)
    • components/kubeflow/katib-launcher/Dockerfile (dockerfile)
    • components/kubeflow/kfserving/Dockerfile (dockerfile)
    • components/kubeflow/launcher/Dockerfile (dockerfile)
    • components/local/base/Dockerfile (dockerfile)
    • components/local/confusion_matrix/Dockerfile (dockerfile)
    • components/local/roc/Dockerfile (dockerfile)
    • components/sample/keras/train_classifier/Dockerfile (dockerfile)
    • contrib/components/openvino/model_convert/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/ovms-deployer/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/predict/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/tf-slim/containers/Dockerfile (dockerfile)
    • frontend/Dockerfile (dockerfile)
    • manifests/gcp_marketplace/deployer/Dockerfile (dockerfile)
    • proxy/Dockerfile (dockerfile)
    • samples/contrib/image-captioning-gcp/src/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/inference_server_launcher/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/preprocess/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/train/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/webapp/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/webapp_launcher/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/pipeline/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/helloworld-ci-sample/helloworld/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/download_dataset/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/submit_result/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/train_model/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_html/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_table/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/tensorboard/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/train/Dockerfile (dockerfile)
    • test/api-integration-test/Dockerfile (dockerfile)
    • test/frontend-integration-test/Dockerfile (dockerfile)
    • test/frontend-integration-test/selenium-standalone-chrome-gcloud-nodejs.Docker/Dockerfile (dockerfile)
    • test/imagebuilder/Dockerfile (dockerfile)
    • test/images/Dockerfile (dockerfile)
    • test/initialization-test/Dockerfile (dockerfile)
    • test/sample-test/Dockerfile (dockerfile)
    • tools/bazel_builder/Dockerfile (dockerfile)
    • go.mod (gomod)
    • frontend/mock-backend/package.json (npm)
    • frontend/package.json (npm)
    • frontend/server/package.json (npm)
    • package.json (npm)
    • test/frontend-integration-test/package.json (npm)
    • frontend/.nvmrc (nvm)
    • backend/metadata_writer/requirements.txt (pip_requirements)
    • backend/requirements.txt (pip_requirements)
    • backend/src/apiserver/visualization/requirements.txt (pip_requirements)
    • components/kubeflow/katib-launcher/requirements.txt (pip_requirements)
    • contrib/components/openvino/ovms-deployer/containers/requirements.txt (pip_requirements)
    • docs/requirements.txt (pip_requirements)
    • samples/contrib/azure-samples/databricks-pipelines/requirements.txt (pip_requirements)
    • samples/contrib/ibm-samples/ffdl-seldon/source/seldon-pytorch-serving-image/requirements.txt (pip_requirements)
    • samples/core/ai_platform/training/requirements.txt (pip_requirements)
    • samples/core/container_build/requirements.txt (pip_requirements)
    • sdk/python/requirements.txt (pip_requirements)
    • test/kfp-functional-test/requirements.txt (pip_requirements)
    • test/sample-test/requirements.txt (pip_requirements)
    • components/gcp/container/component_sdk/python/setup.py (pip_setup)
    • components/kubeflow/dnntrainer/src/setup.py (pip_setup)
    • samples/core/ai_platform/training/setup.py (pip_setup)
    • sdk/python/setup.py (pip_setup)

    Configuration

    :abcd: Renovate has detected a custom config for this PR. Feel free to ask for help if you have any doubts and would like it reviewed.

    Important: Now that this branch is edited, Renovate can't rebase it from the base branch any more. If you make changes to the base branch that could impact this onboarding PR, please merge them manually.

    What to Expect

    With your current configuration, Renovate will create 57 Pull Requests:

    chore(deps): pin dependencies
    chore(deps): update gcr.io/inverting-proxy/agent docker digest to 9817c74
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-gcr.io-inverting-proxy-agent
    • Merge into: master
    • Upgrade gcr.io/inverting-proxy/agent to sha256:9817c740a3705e4bf889e612c071686a8cb3cfcfe9ad191c570a295c37316ff0
    chore(deps): update github.com/vividcortex/mysqlerr commit hash to 4c396ae
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-vividcortex-mysqlerr-digest
    • Merge into: master
    • Upgrade github.com/VividCortex/mysqlerr to 4c396ae82aacc60540048b4846438cec44a1c222
    chore(deps): update golang.org/x/net commit hash to 5f4716e
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/golang.org-x-net-digest
    • Merge into: master
    • Upgrade golang.org/x/net to 5f4716e94777e714bc2fb3e3a44599cb40817aac
    chore(deps): update google.golang.org/genproto commit hash to 646a494
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/google.golang.org-genproto-digest
    • Merge into: master
    • Upgrade google.golang.org/genproto to 646a494a81eaa116cb3e3978e5ac1278e35abfdd
    chore(deps): update docker patch updates docker tags (patch)
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-patch-docker-updates
    • Merge into: master
    • Upgrade golang to 1.13.15-stretch
    • Upgrade tensorflow/tensorflow to 2.0.4-py3
    • Upgrade tensorflow/tensorflow to 2.2.2
    chore(deps): update go.mod dependencies (patch)
    fix(deps): update npm dependencies (patch)
    chore(deps): update alpine docker tag to v3.13
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-alpine-3.x
    • Merge into: master
    • Upgrade alpine to 3.13
    chore(deps): update gcr.io/cloud-marketplace-tools/k8s/deployer_helm/onbuild docker tag to v0.10.10
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-gcr.io-cloud-marketplace-tools-k8s-deployer_helm-onbuild-0.x
    • Merge into: master
    • Upgrade gcr.io/cloud-marketplace-tools/k8s/deployer_helm/onbuild to 0.10.10
    chore(deps): update go.mod dependencies (minor)
    chore(deps): update golang docker tag
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-golang-1.x
    • Merge into: master
    • Upgrade golang to 1.15.7
    • Upgrade golang to 1.15.7-alpine3.12
    • Upgrade golang to 1.14.14-stretch
    chore(deps): update node.js
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/node-12.x
    • Merge into: master
    • Upgrade node to 12.20.1
    • Upgrade node to 12.20.1-alpine
    chore(deps): update npm dependencies (minor)
    chore(deps): update nvcr.io/nvidia/tensorflow docker tag to v19.10
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-nvcr.io-nvidia-tensorflow-19.x
    • Merge into: master
    • Upgrade nvcr.io/nvidia/tensorflow to 19.10-py3
    chore(deps): update python docker tag to v3.9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-python-3.x
    • Merge into: master
    • Upgrade python to 3.9-slim
    • Upgrade python to 3.9
    chore(deps): update tensorflow/tensorflow docker tag
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-tensorflow-tensorflow-2.x
    • Merge into: master
    • Upgrade tensorflow/tensorflow to 2.2.2-py3
    • Upgrade tensorflow/tensorflow to 2.4.1
    chore(deps): update dependency @testing-library/react to v11
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/testing-library-react-11.x
    • Merge into: master
    • Upgrade @testing-library/react to 11.2.3
    chore(deps): update dependency @​types/jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/jest-26.x
    • Merge into: master
    • Upgrade @types/jest to 26.0.20
    chore(deps): update dependency @​types/react to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-17.x
    • Merge into: master
    • Upgrade @types/react to 17.0.0
    chore(deps): update dependency @​types/react-dom to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-dom-17.x
    • Merge into: master
    • Upgrade @types/react-dom to 17.0.0
    chore(deps): update dependency @​types/react-router-dom to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-router-dom-5.x
    • Merge into: master
    • Upgrade @types/react-router-dom to 5.1.7
    chore(deps): update dependency @​types/react-test-renderer to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-test-renderer-17.x
    • Merge into: master
    • Upgrade @types/react-test-renderer to 17.0.0
    chore(deps): update dependency @​types/tar-stream to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/tar-stream-2.x
    • Merge into: master
    • Upgrade @types/tar-stream to 2.2.0
    chore(deps): update dependency jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-jest-monorepo
    • Merge into: master
    • Upgrade jest to 26.6.3
    chore(deps): update dependency prettier to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/prettier-2.x
    • Merge into: master
    • Upgrade prettier to 2.2.1
    • Upgrade @types/prettier to 2.1.6
    chore(deps): update dependency react-scripts to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-scripts-4.x
    • Merge into: master
    • Upgrade react-scripts to 4.0.1
    chore(deps): update dependency standard-version to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/standard-version-9.x
    • Merge into: master
    • Upgrade standard-version to 9.1.0
    chore(deps): update dependency supertest to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/supertest-6.x
    • Merge into: master
    • Upgrade supertest to 6.1.3
    chore(deps): update dependency ts-jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/ts-jest-26.x
    • Merge into: master
    • Upgrade ts-jest to 26.5.0
    chore(deps): update dependency ts-node to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/ts-node-9.x
    • Merge into: master
    • Upgrade ts-node to 9.1.1
    chore(deps): update dependency typescript to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/typescript-4.x
    • Merge into: master
    • Upgrade typescript to 4.1.3
    chore(deps): update dependency webpack to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/webpack-5.x
    • Merge into: master
    • Upgrade webpack to 5.19.0
    chore(deps): update dependency webpack-bundle-analyzer to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/webpack-bundle-analyzer-4.x
    • Merge into: master
    • Upgrade webpack-bundle-analyzer to 4.4.0
    chore(deps): update module argoproj/argo to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-argoproj-argo-2.x
    • Merge into: master
    • Upgrade github.com/argoproj/argo to 5f5150730c644865a5867bf017100732f55811dd
    chore(deps): update module cenkalti/backoff to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-cenkalti-backoff-4.x
    • Merge into: master
    • Upgrade github.com/cenkalti/backoff to v4.1.0
    chore(deps): update module grpc-ecosystem/grpc-gateway to v2
    chore(deps): update module k8s.io/client-go to v12
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/k8s.io-client-go-12.x
    • Merge into: master
    • Upgrade k8s.io/client-go to v12.0.0
    chore(deps): update module masterminds/squirrel to v1
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-masterminds-squirrel-1.x
    • Merge into: master
    • Upgrade github.com/Masterminds/squirrel to d1a9a0e53225d7810c4f5e1136db32f4e360c5bb
    chore(deps): update module mattn/go-sqlite3 to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-mattn-go-sqlite3-2.x
    • Merge into: master
    • Upgrade github.com/mattn/go-sqlite3 to v2.0.6
    chore(deps): update module minio/minio-go to v7
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-minio-minio-go-7.x
    • Merge into: master
    • Upgrade github.com/minio/minio-go to v7.0.7
    chore(deps): update module robfig/cron to v3
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-robfig-cron-3.x
    • Merge into: master
    • Upgrade github.com/robfig/cron to v3.0.1
    fix(deps): update dependency @google-cloud/storage to v5
    fix(deps): update dependency crypto-js to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/crypto-js-4.x
    • Merge into: master
    • Upgrade crypto-js to ^4.0.0
    • Upgrade @types/crypto-js to 4.0.1
    fix(deps): update dependency d3 to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/d3-6.x
    • Merge into: master
    • Upgrade d3 to 6.5.0
    • Upgrade @types/d3 to 6.3.0
    fix(deps): update dependency d3-dsv to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/d3-dsv-2.x
    • Merge into: master
    • Upgrade d3-dsv to 2.0.0
    • Upgrade @types/d3-dsv to 2.0.1
    fix(deps): update dependency http-proxy-middleware to v1
    fix(deps): update dependency js-yaml to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/js-yaml-4.x
    • Merge into: master
    • Upgrade js-yaml to 4.0.0
    • Upgrade @types/js-yaml to 4.0.0
    fix(deps): update dependency markdown-to-jsx to v7
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/markdown-to-jsx-7.x
    • Merge into: master
    • Upgrade markdown-to-jsx to 7.1.1
    fix(deps): update dependency mocha to v8
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/mocha-8.x
    • Merge into: master
    • Upgrade mocha to 8.2.1
    fix(deps): update dependency re-resizable to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/re-resizable-6.x
    • Merge into: master
    • Upgrade re-resizable to 6.9.0
    fix(deps): update dependency react-ace to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-ace-9.x
    • Merge into: master
    • Upgrade react-ace to 9.3.0
    fix(deps): update dependency react-dropzone to v11
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-dropzone-11.x
    • Merge into: master
    • Upgrade react-dropzone to 11.2.4
    fix(deps): update dependency react-router-dom to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-reactrouter-monorepo
    • Merge into: master
    • Upgrade react-router-dom to 5.2.0
    fix(deps): update dependency webdriverio to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-webdriverio-monorepo
    • Merge into: master
    • Upgrade webdriverio to 6.12.1
    fix(deps): update mui monorepo (major)
    fix(deps): update react monorepo to v17 (major)
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-react-monorepo
    • Merge into: master
    • Upgrade react to 17.0.1
    • Upgrade react-dom to 17.0.1
    • Upgrade react-test-renderer to 17.0.1

    :children_crossing: Branch creation will be limited to a maximum of 2 per hour, so it doesn't swamp any CI resources or spam the project. See docs for prhourlylimit for details.


    :question: Got questions? Check out Renovate's Docs, particularly the Getting Started section. If you need any further assistance then you can also request help here.


    This PR has been generated by WhiteSource Renovate. View repository job log here.

    lgtm approved size/M ok-to-test cla: yes 
    opened by renovate-bot 42
  • KFP sdk client authentication error

    /kind bug

    What steps did you take and what happened: I enabled authentication with Azure AD on AKS and installed Kubeflow with kfctl_istio_dex.v1.1.0.yaml, skipping dex from the manifest since Azure AD is itself an OIDC provider. The load balancer is exposed over HTTPS with a TLS 1.3 self-signed cert.

    OIDC Auth Service Configuration:

    • client_id=XXXX
    • oidc_provider=https://login.microsoftonline.com/XXXX/v2.0
    • oidc_redirect_uri=https://XXXX/login/oidc
    • oidc_auth_url=https://login.microsoftonline.com/XXXX/oauth2/v2.0/authorize
    • application_secret=XXXX
    • skip_auth_uri=
    • namespace=istio-system
    • userid-header=kubeflow-userid
    • userid-prefix=

    Issue: Using the KFP client to upload a pipeline (client.pipeline_uploads.upload_pipeline()) with the client config below throws an error.

    client = kfp.Client(host='https://<LoadBalancer IP Address>/pipeline', existing_token=<token>)

    Error HTTPSConnectionPool(host='<Host_IP>', port=443): Max retries exceeded with url: /pipeline/apis/v1beta1/pipelines/upload?name=local_exp-6714175b-6d59-40d0-9019-5b4ee58dc483 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1076)')))

    Is there a way to override cert verification?

    or

    Using the KFP client to upload the pipeline (client.pipeline_uploads.upload_pipeline()) with the client config below redirects to a Google auth error.

    client = kfp.Client(host='https://<LoadBalancer IP Address>/pipeline', client_id=<client_id>, other_client_id=<client_id>, other_client_secret=<application_secret>, namespace='kfauth')

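
    On recent SDK versions, certificate verification can be pointed at a custom CA bundle rather than disabled. A hedged sketch (availability of the `ssl_ca_cert` parameter depends on the kfp / kfp-server-api version):

```python
def make_client(host, token, ca_cert_path):
    """Build a KFP client that trusts a self-signed certificate's CA."""
    import kfp  # requires the kfp SDK; imported lazily

    # ssl_ca_cert points the underlying HTTP client at a CA bundle file
    # instead of turning verification off entirely.
    return kfp.Client(host=host, existing_token=token, ssl_ca_cert=ca_cert_path)
```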

    Environment:

    • Kubeflow version: v1.1.0
    • kfctl version: kfctl_v1.1.0-0-g9a3621e_linux.tar.gz
    • kfp version: 1.0.1
    • python version: 3.6.8
    • kfp-server-api version: 1.0.1
    • Kubernetes platform: Azure Kubernetes Service
    • Kubernetes version: 1.17.11

    CC: @Bobgy

    area/sdk/client kind/feature 
    opened by sudivate 42
  • fix(backend): remove Bazel from building the API. Part of #3250

    Description of your changes: Remove Bazel from the API generation; this is part of https://github.com/kubeflow/pipelines/issues/3250. The suggested solution is based on work from: https://github.com/kubeflow/pipelines/pull/4393.

    The suggested solution uses docker to make it simpler for users to not have to install all the necessary tools and environments locally, but not sure it is the best solution. Post this as an early draft in order to discuss possible solutions.

    Checklist:

    lgtm approved size/XXL cla: yes 
    opened by NikeNano 41
  • Update kubeflow/manifests to ship correct version of KFP in 0.7? 1.31

    We are trying to finalize Kubeflow 0.7 by end of month.

    Which version of KFP should be shipped in 0.7?

    We are currently shipping KFP 0.1.23. It looks like this is about 1 month old. It looks like there was a fairly recent release 0.1.31 https://github.com/kubeflow/pipelines/releases

    @IronPan @jessiezcc Should we ship 0.1.31 in 0.7? Are there additional improvements that we would like to ship in 0.7? If so do we have an ETA for when they will land?

    priority/p0 kind/feature area/pipelines 
    opened by jlewi 39
  • [Feature] Supports parameterized S3Artifactory for Pipeline and ContainerOp in kfp package

    Motivation

    I am running a kubeflow pipeline deployment with my custom helm chart and a minio s3 gateway to my custom bucket. This bucket has a different name from the default one in kfp, hence I need some way to parameterize the s3 artifact configs.

    Status

    • Waiting for Review

    Features

    • kfp can now declare a custom artifact location inside a pipeline or containerop.
    from kfp import dsl
    from kubernetes.client.models import V1SecretKeySelector
    
    
    @dsl.pipeline( name='foo', description='hello world')
    def foo_pipeline(namespace: str):
    
        # configures artifact location
        artifact_location = dsl.ArtifactLocation.s3(
                                bucket="foobar",
                                endpoint="minio-service.%s:9000" % namespace,  # parameterized namespace
                                insecure=True,
                                access_key_secret=V1SecretKeySelector(name="minio", key="accesskey"),
                                secret_key_secret={"name": "minio", "key": "secretkey"}  # accepts dict also
        )
    
        # set pipeline level artifact location
        conf = dsl.get_pipeline_conf().set_artifact_location(artifact_location)
        
        # use pipeline level artifact location (i.e. minio-service)
        op1 = dsl.ContainerOp(name='foo', image='bash:latest')
    
        # use containerop level artifact location (i.e. aws)
        op2 = dsl.ContainerOp(
                            name='foo', 
                            image='bash:latest',
                            # configures artifact location
                            artifact_location=dsl.ArtifactLocation.s3(
                                bucket="foobar",
                                endpoint="s3.amazonaws.com",
                                insecure=False,
                                access_key_secret=V1SecretKeySelector(name="s3-secret", key="accesskey"),
                                secret_key_secret=V1SecretKeySelector(name="s3-secret", key="secretkey"))
        )
    
    

    TLDR changes

    • argo-models is now a dependency in setup.py (argo v2.2.1)
    • Added static class ArtifactLocation
      • to help generate artifact location for s3
      • to help generate artifact for workflow templates
    • Updated PipelineConf to support artifact location
    • Updated k8s helper and related, to support openapi objects (I accidentally used openapi generator instead of swagger codegen for argo-models)
    • Added unit test for ArtifactLocation
    • Fixed unit test for kfp.aws (Found that it has a bug, and was not imported into the unit test)

    This change is Reviewable

    lgtm approved size/XL ok-to-test 
    opened by eterna2 39
  • chore(deps): bump json5 from 2.1.1 to 2.2.3 in /frontend/server

    Bumps json5 from 2.1.1 to 2.2.3.

    Release notes

    Sourced from json5's releases.

    v2.2.3

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)

    v2.1.2

    • Fix: Bump minimist to v1.2.5. (#222)
    Commits
    • c3a7524 2.2.3
    • 94fd06d docs: update CHANGELOG for v2.2.3
    • 3b8cebf docs(security): use GitHub security advisories
    • f0fd9e1 docs: publish a security policy
    • 6a91a05 docs(template): bug -> bug report
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    needs-ok-to-test size/S dependencies javascript 
    opened by dependabot[bot] 2
  • [feature] Add filter for finished_at runs

    Feature Area

    /area sdk

    What feature would you like to see?

    Protocol buffer filtering for the finished_at field in runs.

    What is the use case or pain point?

    We are looking to archive runs that finished over 90 days ago. While we have a workaround (see below), a filter (like the one that currently exists for the created_at field) would make our request payloads substantially smaller.

    Is there a workaround currently?

    Our current approach is to get all non-archived runs (using a predicate filter), then iterate over the result and compare the datetime of each run's finished_at field to the datetime from 90 days ago. We then archive the runs that finished more than 90 days ago.
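    The client-side part of the workaround above can be sketched roughly as follows; the run records are hypothetical stand-ins for what client.list_runs() returns, and the archive call itself is omitted:

    ```python
    from datetime import datetime, timedelta, timezone

    # Hypothetical run records standing in for the objects returned by
    # client.list_runs(); only the fields used here are modeled.
    runs = [
        {'id': 'run-a', 'finished_at': datetime.now(timezone.utc) - timedelta(days=120)},
        {'id': 'run-b', 'finished_at': datetime.now(timezone.utc) - timedelta(days=10)},
    ]

    # Select every run that finished more than 90 days ago.
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    to_archive = [run['id'] for run in runs if run['finished_at'] < cutoff]
    # For each id in to_archive, the real code would then call the SDK's
    # archive endpoint (an API call, not shown here).
    ```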


    Love this idea? Give it a 👍.

    area/sdk kind/feature 
    opened by knjk04 0
  • fix(components): update create_endpoint, delete_endpoint and deploy_model to use new remote_runner

    Issue The create_endpoint, delete_endpoint and deploy_model aiplatform components currently use the google_cloud_pipeline_components.container.v1.* scripts, which fail with the error below (example for create_endpoint)

    Error while finding module specification for 'google_cloud_pipeline_components.container.v1.endpoint.create_endpoint.launcher' (ModuleNotFoundError: No module named 'google_cloud_pipeline_components.container.v1')
    

    Description of your changes
    This PR changes the entrypoint of the create_endpoint, delete_endpoint and deploy_model components from python3 -u -m google_cloud_pipeline_components.container.v1.endpoint.create_endpoint.launcher to python3 -u -m google_cloud_pipeline_components.container.aiplatform.remote_runner --cls_name Endpoint --method_name create

    Adapted component inputs to comply with the input requested by remote_runner:

    • Removed outdated parameters, as they are not supported by new versions of the Vertex AI Python SDK and can break execution.

    • Added new optional method parameters supported by the Vertex AI Python SDK.

    The deploy_model component now uses Model.deploy instead of Endpoint.deploy, thus the deploy_model folder has been moved from the endpoint folder to the model folder.

    Modified all relevant tests to comply with the new component definition.
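    The entrypoint change can be visualized with a hypothetical fragment of the resulting component spec; the image name is a placeholder and the exact layout is an illustrative assumption, not the actual component definition:

    ```yaml
    # Hypothetical create_endpoint component spec fragment after this
    # change; the container image is a placeholder.
    implementation:
      container:
        image: <gcpc-container-image>
        command:
          - python3
          - -u
          - -m
          - google_cloud_pipeline_components.container.aiplatform.remote_runner
          - --cls_name
          - Endpoint
          - --method_name
          - create
    ```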

    Checklist:

    needs-ok-to-test size/XL 
    opened by GiuliaMassimetti 2
  • feat(frontend): Support cloning recurringRun in KFP v2

    1. Enable cloning a recurring run with the same runtimeConfig and run trigger as the original recurring run.
    2. Enable users to change the runtimeConfig and run trigger of the cloned recurring run.
    size/L 
    opened by jlyaoyuli 2
  • feat(frontend): Implement aws-js-sdk credentials to support IRSA for s3

    Description of your changes:

    #8502 details most of the changes and reasoning in a design document.

    This removes most of the aws-helper code that grabs credentials from the EC2 instance metadata, since that logic is already handled by the credentialProviderChain imported here.

    Checklist:

    size/XXL 
    opened by ryansteakley 4
  • [sdk] No longer possible to compile components that use PipelineTaskFinalStatus

    Environment

    • KFP version: N/A
    • KFP SDK version: 2.0.0b10
    • All dependencies version:
    kfp                      2.0.0b10
    kfp-pipeline-spec        0.1.17
    kfp-server-api           2.0.0a6
    

    Steps to reproduce

    In version 1.x.x it was possible to compile components that use PipelineTaskFinalStatus, e.g. using:

    from kfp.v2.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler
    from kfp.components import load_component_from_file
    
    
    @component(output_component_file='example.yaml')
    def example(status: PipelineTaskFinalStatus):
        print(status)
    
    
    loaded_component = load_component_from_file('example.yaml')
    
    @pipeline
    def example_pipeline():
        with ExitHandler(loaded_component()):
            pass
    

    In version 2.0.0bx we expected this to still be possible, e.g. with:

    from kfp.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler
    from kfp.compiler import Compiler
    from kfp.components import load_component_from_file
    
    
    @component
    def example(status: PipelineTaskFinalStatus):
        print(status)
    
    
    Compiler().compile(example, 'example.yaml')
    
    loaded_component = load_component_from_file('example.yaml')
    
    @pipeline
    def example_pipeline():
        with ExitHandler(loaded_component()):
            pass
    

    However, this is no longer possible, for what seems like a variety of reasons. First of all, the SDK explicitly forbids it in

    https://github.com/kubeflow/pipelines/blob/fdf3ee7b68b2293d08e14c389b0dab9a57854e2a/sdk/python/kfp/compiler/pipeline_spec_builder.py#L333-L338

    I tried commenting out the above check, which does enable us to compile valid component yaml without further changes:

    # PIPELINE DEFINITION
    # Name: example
    # Inputs:
    #    status: dict
    components:
      comp-example:
        executorLabel: exec-example
        inputDefinitions:
          parameters:
            status:
              isOptional: true
              parameterType: STRUCT
    deploymentSpec:
      executors:
        exec-example:
          container:
            args:
            - --executor_input
            - '{{$}}'
            - --function_to_execute
            - example
            command:
            - sh
            - -c
            - "\nif ! [ -x \"$(command -v pip)\" ]; then\n    python3 -m ensurepip ||\
              \ python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1\
              \ python3 -m pip install --quiet     --no-warn-script-location 'kfp==2.0.0-beta.10'\
              \ && \"$0\" \"$@\"\n"
            - sh
            - -ec
            - 'program_path=$(mktemp -d)
    
              printf "%s" "$0" > "$program_path/ephemeral_component.py"
    
              python3 -m kfp.components.executor_main                         --component_module_path                         "$program_path/ephemeral_component.py"                         "$@"
    
              '
            - "\nimport kfp\nfrom kfp import dsl\nfrom kfp.dsl import *\nfrom typing import\
              \ *\n\ndef example(status: PipelineTaskFinalStatus):\n    print(status)\n\
              \n"
            image: python:3.7
    pipelineInfo:
      name: example
    root:
      dag:
        tasks:
          example:
            cachingOptions:
              enableCache: true
            componentRef:
              name: comp-example
            inputs:
              parameters:
                status:
                  componentInputParameter: status
            taskInfo:
              name: example
      inputDefinitions:
        parameters:
          status:
            isOptional: true
            parameterType: STRUCT
    schemaVersion: 2.1.0
    sdkVersion: kfp-2.0.0-beta.10
    

    Unfortunately, the input type of the status parameter is converted to STRUCT in the process, so loading in the component yaml results in a component that fails at runtime because the status argument is not provided by the backend as expected.

    Expected result

    It should be possible to compile components that use PipelineTaskFinalStatus, for feature parity with version 1.x.x.

    Materials and Reference


    Impacted by this bug? Give it a 👍.

    kind/bug area/sdk 
    opened by suned 0