Build, test, deploy, iterate - Dev and prod tool for data science pipelines

Overview

Prodmodel

Prodmodel is a build system for data science pipelines. Users, testers, and contributors are welcome!

Motivation · Concepts · Installation · Usage · Contributing · Contact · Licence

Motivation

  • Performance. No need to rerun things: everything is cached, and switching between multiple versions is easy. Prodmodel can figure out whether a particular part of the code has already been executed with a particular piece of data and simply reuse the cached output.
  • Easy debugging. Every single dependency - code or data - is version controlled and tracked.
  • Deploy to production. Models are more than just a file. Prodmodel makes sure that the correct versions of the label encoders, feature transformation code, data and model files are all packaged together.

Concepts

A build system is a DAG of rules (transformations), inputs and targets. In Prodmodel, inputs can be

  • data,
  • Python code,
  • and configuration.

A rule transforms any of the above into an output (which can in turn be depended on by other rules). Rules therefore need to be re-executed (and their outputs re-created) if any of their dependencies change. Prodmodel keeps track of all of these dependencies.

The outputs of the rules are targets. Every target corresponds to an output (e.g. a model or a dataset). These outputs are cached and version controlled.

Prodmodel therefore ensures

  • correctness, by executing every piece of code (e.g. feature transformation, model building, tests) that can potentially be affected by a change, and
  • performance, by executing only the necessary code, saving time compared to rerunning the whole pipeline.
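For example, a minimal build.py could chain two rules into a small dependency graph. This is only a sketch using the rule API shown in the Usage section below; the file names, function names and dtypes are hypothetical, and passing one rule's output as an input object of another follows from the description above.

from prodmodel.rules import rules

# Input: a local CSV file, tracked and cached by Prodmodel.
raw_data = rules.data_source(file='data.csv', type='csv', dtypes={...})

# Rule 1: build features from the raw rows by calling build_features in features.py.
features = rules.transform(objects={'data': raw_data}, file='features.py', fn='build_features')

# Rule 2: train a model from the features; it is re-executed only if a dependency changes.
model = rules.transform(objects={'features': features}, file='train.py', fn='train_model')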

Rules

Every rule is a statically typed function, where the inputs are targets, data, or configs. The execution of a rule outputs some data (e.g. a different feature set or a model), which can be used in other rules.

In order to use Prodmodel, your code has to be structured as functions that the rules can call into.

Targets

Targets are created by rule functions. Targets can be executed to generate output files. IterableDataTarget is a special target which can be used as an iterable of dicts to make iterating over datasets easier. Regular DataTargets can represent any Python object.
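As a sketch, a function called by a transform rule can consume an IterableDataTarget simply by looping over its rows as dicts. The function and field names below are hypothetical, and the calling convention assumes the rule's objects are passed to the function as keyword arguments.

# features.py (hypothetical): 'data' is an IterableDataTarget, i.e. an iterable of dicts.
def build_features(data):
    return [
        {'age_squared': row['age'] ** 2, 'income': row['income']}
        for row in data
    ]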

Installation

Prodmodel requires Python 3.6 or later. Use pip to install prodmodel.

pip install prodmodel --user

Usage

Create a build.py file in your data science folder. The build file contains references to your inputs and the build rules you can execute.

from prodmodel.rules import rules

csv_data = rules.data_source(file='data.csv', type='csv', dtypes={...})

my_model = rules.transform(objects={'data': csv_data}, file='kmeans.py', fn='compute_kmeans')
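A matching kmeans.py might look like the sketch below. The exact signature of compute_kmeans is an assumption (the rule's objects are expected to be passed as keyword arguments), the rows are assumed to be numeric dicts as described in the Targets section, and scikit-learn is used purely as an illustration.

# kmeans.py: called by the my_model rule above via fn='compute_kmeans'.
from sklearn.cluster import KMeans

def compute_kmeans(data):
    # Turn the rows (dicts of numeric values) into a plain feature matrix.
    matrix = [list(row.values()) for row in data]
    # Fit and return the clustering model; the returned object becomes the target's output.
    return KMeans(n_clusters=3).fit(matrix)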

Now you can build your model by running prodmodel my_model from the directory of build.py, or prodmodel <path_to_my_directory>:my_model from any directory.

Prodmodel creates a .prodmodel directory under the home directory of the user to store log and config files.

Documentation

Check out a complete example project for more examples.

The complete list of build rules can be found here.

Prodmodel searches for a config file under <user home dir>/.prodmodel/config. The config file can be created manually based on this template.

Arguments

  • --force_external: Some data sources are remote (e.g. an SQL server), so tracking changes is not always feasible. This argument gives the user manual control over when to reload these data sources.
  • --cache_data: Cache local data files if they have changed. This can be useful for debugging and reproducibility, since it makes sure every data source used for a specific build is saved.
  • --output_format: One of none, str, bytes and log: the format in which the output of the build target is written to stdout (see the example below).
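For example, to build my_model while caching the data sources it uses and printing the result as a string (the target name is hypothetical, and the argument syntax assumes the usual --flag / --flag=value style):

prodmodel my_model --cache_data --output_format=str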

List targets in build file

  • Run prodmodel ls <path_to_build> to list the targets in a build file, where <path_to_build> points to the build file or its directory.

Cleaning old cache files

  • Run prodmodel clean <target> --cutoff_date=<cutoff datetime> to delete the output cache files of a target created before the cutoff datetime, which has to be in %Y-%m-%dT%H:%M:%S (YYYY-mm-ddTHH:MM:SS) format, as in the example below.
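For instance, the following would delete the cached outputs of a (hypothetical) my_model target created before the start of 2023:

prodmodel clean my_model --cutoff_date=2023-01-01T00:00:00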

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Contact

Feel free to email me at [email protected] if you have any questions, need help, or would like to contribute to the code.

Licence

Apache 2.0
