A Powerful Serverless Analysis Toolkit That Takes Trial And Error Out of Machine Learning Projects

Overview


KXY: A Seamless API to 10x The Productivity of Machine Learning Engineers


Documentation

https://www.kxy.ai/reference/

Installation

From PyPI:

pip install kxy

From GitHub:

git clone https://github.com/kxytechnologies/kxy-python.git && cd ./kxy-python && pip install .

Authentication

All heavy-duty computations are run on our serverless infrastructure and require an API key. To configure the package with your API key, run

kxy configure

and follow the instructions. To get an API key you need an account; you can sign up for a free trial on the KXY website. You'll then automatically be issued an API key, which you can retrieve from your account.

KXY is free for academic use.

Docker

The Docker image kxytechnologies/kxy has been built for your convenience and comes with Anaconda, auto-sklearn, and the kxy package.

To start a Jupyter Notebook server from a sandboxed Docker environment, run

docker run -i -t -p 5555:8888 kxytechnologies/kxy:latest /bin/bash -c "kxy configure <YOUR API KEY> && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root --NotebookApp.token=''"

where you should replace <YOUR API KEY> with your API key, then navigate to http://localhost:5555 in your browser. This Docker environment comes with all the examples available on the documentation website.

To start a Jupyter Notebook server from an existing directory of notebooks, run

docker run -i -t --mount src=</path/to/your/local/dir>,target=/opt/notebooks,type=bind -p 5555:8888 kxytechnologies/kxy:latest /bin/bash -c "kxy configure <YOUR API KEY> && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root --NotebookApp.token=''"

where you should replace </path/to/your/local/dir> with the path to your local notebook folder (and <YOUR API KEY> with your API key), then navigate to http://localhost:5555 in your browser.

Other Programming Languages

We plan to release friendly API clients in more programming languages.

In the meantime, you can directly issue requests to our RESTful API using your favorite programming language.
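
As a purely illustrative sketch (the endpoint path, header name, and payload fields below are placeholders rather than the documented API; consult https://www.kxy.ai/reference/ for the actual request format), a call from Python using the requests library might look like this:

import requests

API_KEY = 'YOUR_API_KEY'  # the key issued with your KXY account

# Placeholder endpoint, header, and payload: substitute the real ones from the API reference.
url = 'https://api.kxy.ai/<endpoint>'
headers = {'x-api-key': API_KEY, 'content-type': 'application/json'}
payload = {'<field>': '<value>'}

response = requests.post(url, json=payload, headers=headers)
print(response.status_code, response.json())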


Comments
  • error in import kxy

    Hi, After installing the kxy package and configuring the API key, the import kxy shows the error below:

    .../python3.9/site-packages/kxy/pfs/pfs_selector.py in <module>
          6 import numpy as np
          7 
    ----> 8 import tensorflow as tf
          9 from tensorflow.keras.callbacks import EarlyStopping, TerminateOnNaN
         10 from tensorflow.keras.optimizers import Adam
    
    ModuleNotFoundError: No module named 'tensorflow'
    
    

    what version of tensorflow is needed for kxy to work?

    opened by zeydabadi 2
  • generate_features Documentation?

    Is there any documentation on how to use the generate_features function? It doesn't appear in the documentation and I can't find it on GitHub, e.g. how to use the entity column, how to format time-series data in advance for it, etc. Thanks!

    opened by ddofer 1
  • error kxy.data_valuation

    Hi, after running achievable_performance_df = X_train_reduced.kxy.data_valuation(target_column='state', problem_type='classification', include_mutual_information=True, anonymize=True) I get the following error and the function does not return anything:

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3.9/asyncio/tasks.py", line 258, in __step
        result = coro.throw(exc)
      File "/home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py", line 1104, in wrapper
        raise WebSocketClosedError()
    tornado.websocket.WebSocketClosedError

    Task exception was never retrieved
    future: <Task finished name='Task-46004' coro=<WebSocketProtocol13.write_message..wrapper() done, defined at /home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py:1100> exception=WebSocketClosedError()>
    Traceback (most recent call last):
      File "/home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py", line 1102, in wrapper
        await fut
      File "/usr/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
        future.result()
    tornado.iostream.StreamClosedError: Stream is closed

    opened by zeydabadi 0
Releases(v1.4.10)
  • v1.4.10(Apr 25, 2022)

    Change Log

    v.1.4.10 Changes

    • Added a function to construct features derived from PFS mutual information estimation that are expected to be linearly related to the target.
    • Fixed a global name conflict in kxy.learning.base_learners.

    v.1.4.9 Changes

    • Changed the activation function used by PFS from ReLU to swish/SiLU.
    • Left the logging level for the user to set.

    v.1.4.8 Changes

    • Froze the versions of all Python packages in the Dockerfile.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (the learning rate is now 0.005 and epsilon 1e-04).
    • Added a seed parameter to PFS' fit method for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.
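
    For example, a minimal sketch of global seeding (the notes name kxy.misc.tf.set_seed but do not spell out its signature; a single integer seed is assumed here):

    from kxy.misc.tf import set_seed

    set_seed(123)  # assumed signature: one integer seed; subsequent PFS fits become deterministic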

    v.1.4.6 Changes

    Minor PFS improvements.

    • Added more (robust) mutual information loss functions.
    • Exposed the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposed the number of epochs as a parameter of PFS' fit.
  • v1.4.9(Apr 12, 2022)

    Change Log

    v.1.4.9 Changes

    • Changed the activation function used by PFS from ReLU to swish/SiLU.
    • Left the logging level for the user to set.

    v.1.4.8 Changes

    • Froze the versions of all Python packages in the Dockerfile.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (the learning rate is now 0.005 and epsilon 1e-04).
    • Added a seed parameter to PFS' fit method for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Added more (robust) mutual information loss functions.
    • Exposed the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposed the number of epochs as a parameter of PFS' fit.
  • v1.4.8(Apr 11, 2022)

    Change Log

    v.1.4.8 Changes

    • Froze the versions of all Python packages in the Dockerfile.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (the learning rate is now 0.005 and epsilon 1e-04).
    • Added a seed parameter to PFS' fit method for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Added more (robust) mutual information loss functions.
    • Exposed the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposed the number of epochs as a parameter of PFS' fit.
  • v1.4.7(Apr 10, 2022)

    Change Log

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (the learning rate is now 0.005 and epsilon 1e-04).
    • Added a seed parameter to PFS' fit method for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Added more (robust) mutual information loss functions.
    • Exposed the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposed the number of epochs as a parameter of PFS' fit.
  • v1.4.6(Apr 10, 2022)

    Changes

    • Added more (robust) mutual information loss functions.
    • Exposed the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposed the number of epochs as a parameter of PFS' fit.
  • v1.4.5(Apr 9, 2022)

  • v1.4.4(Apr 8, 2022)

  • v0.3.2(Aug 14, 2020)

  • v0.3.0(Aug 3, 2020)

    Added a maximum-entropy-based classifier (kxy.MaxEntClassifier) and regressor (kxy.MaxEntRegressor), both following the scikit-learn signature for fitting and predicting.

    These models estimate the posterior mean E[u_y|x] and the posterior standard deviation sqrt(Var[u_y|x]) for any specific value of x, where the copula-uniform representations (u_y, u_x) follow the maximum-entropy distribution.

    Predictions in the primal are derived from E[u_y|x].
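
    A minimal sketch of the advertised workflow (the scikit-learn-style fit/predict calls follow these notes; the toy data and the no-argument constructor are assumptions):

    import numpy as np
    import kxy

    # Toy data for illustration only.
    x_train = np.random.randn(100, 3)
    y_train = (x_train[:, 0] > 0).astype(int)
    x_test = np.random.randn(20, 3)

    clf = kxy.MaxEntClassifier()  # scikit-learn-style estimator per these notes
    clf.fit(x_train, y_train)
    y_pred = clf.predict(x_test)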

  • v0.2.0(Jun 25, 2020)

    • Regression analyses now fully support categorical variables.
    • Foundations for multi-output regressions are laid.
    • Categorical variables are now systematically encoded and treated as continuous, consistent with what's done at the learning stage.
    • Regression and classification are further normalized; most of the compute for classification problems now takes place on the API side and should be considerably faster.
  • v0.0.18(May 26, 2020)

  • v0.0.16(May 18, 2020)

  • v0.0.15(May 18, 2020)

  • v0.0.14(May 18, 2020)

  • v0.0.13(May 16, 2020)

  • v0.0.11(May 13, 2020)

  • v0.0.10(May 11, 2020)

Owner
KXY Technologies, Inc.