Lightweight Python library for fast and reproducible experimentation 🔬

Overview

Steppy

What is Steppy?

  1. Steppy is a lightweight, open-source Python 3 library for fast and reproducible experimentation.
  2. Steppy lets data scientists focus on data science, not on software development issues.
  3. Steppy's minimal interface does not impose constraints, yet it enables clean machine learning pipeline design.

What problems does Steppy solve?

Problems

In the course of a project, data scientists face two problems:

  1. Difficulties with reproducibility in data science / machine learning projects.
  2. Lack of the ability to prepare or extend experiments quickly.

Solution

Steppy addresses both problems by introducing two simple abstractions: Step and Transformer. We consider this the minimal interface for building machine learning pipelines.

  1. Step is a wrapper over the transformer that handles multiple aspects of pipeline execution, such as saving intermediate results (if needed), checkpointing the model during training, and much more.
  2. Transformer, in turn, is a purely computational, data-scientist-defined piece that takes input data and produces some output data. Typical Transformers are neural networks, machine learning algorithms, and pre- or post-processing routines. A minimal sketch of both abstractions follows this list.
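
Below is a minimal sketch of how the two abstractions fit together, based on steppy's documented Step and BaseTransformer interface; the transformer class, data values, and the './experiments' directory are illustrative only:

    import numpy as np
    from steppy.base import Step, BaseTransformer

    # Transformer: a purely computational piece defined by the data scientist.
    class DoubleFeatures(BaseTransformer):
        def transform(self, features):
            # Take input data, produce output data.
            return {'features': features * 2}

    # Data is passed around as a dictionary of named inputs.
    data = {'input': {'features': np.array([[1, 6], [2, 5], [3, 4]])}}

    # Step wraps the transformer and handles execution, saving, and checkpointing.
    step = Step(name='double_features',
                transformer=DoubleFeatures(),
                input_data=['input'],
                experiment_directory='./experiments')

    output = step.fit_transform(data)  # e.g. {'features': array([[ 2, 12], ...])}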

Start using steppy

Installation

Steppy requires Python 3.5 or above.

pip3 install steppy

(You probably want to install it inside a virtualenv.)

Resources

  1. 📒 Documentation
  2. 💻 Source
  3. 📛 Bug reports
  4. 🚀 Feature requests
  5. 🌟 Tutorial notebooks (in a separate repository)

Feature Requests

Please send us your ideas on how to improve the steppy library! We are collecting your comments here: Feature requests.

Roadmap

At this point, Steppy is an early-stage library that has been heavily tested on multiple machine learning challenges (data-science-bowl, toxic-comment-classification-challenge, mapping-challenge) and educational projects (minerva-advanced-data-scientific-training).

We are developing Steppy into a practical tool for data scientists, one that lets them run experiments easily and change their pipelines with just a few manipulations of the code.

Related projects

We are also building steppy-toolkit, a collection of high-quality implementations of the top deep learning architectures, all of them with the same intuitive interface.

Contributing

You are welcome to contribute to the Steppy library. Please check CONTRIBUTING for more information.

Terms of use

Steppy is MIT-licensed.

Comments
  • Concat features

    How is it possible to build the following Step in the new version (using pandas_concat_inputs)?

        Step(transformer=GroupbyAggregationsFeatures(AGGREGATION_RECIPIES),
             input_steps=[df_step],
             input_data=['input'],
             adapter=Adapter({
                 'X': ([('input', 'X'),
                        (df_step.name, 'X')],
                       pandas_concat_inputs)
             }),
             cache_dirpath=config.env.cache_dirpath)
    opened by denyslazarenko 8
  • Docs3

    Pull Request template

    Doc contributions

    Pages contributed: Contributing.html, FAQ.html, intro.html, testdoc.html

    Tested by running, in docs/:

        (Steppy) $ sphinx-apidoc -o generated/ -d 4 -fMa ../steppy
        (Steppy) $ clear; make clean; make html
    

    Regards Bruce

    core contributors to minerva.ml

    opened by bcottman 6
  • How to evaluate each step only once?

    I have the following structure of my steps (screenshot omitted). The problem is that many steps are called more than once, which makes training very slow. Is it possible to simplify this somehow? More precisely, how can this part be optimized? I would like to compute input_missing just once; see the sketch below.
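
    A sketch of one possible mitigation, assuming the cache_output flag discussed elsewhere in this thread lets downstream steps reuse a shared step's output within a run; FillMissing, ModelA, and ModelB are hypothetical transformers:

        # Compute `input_missing` once; both branches reuse its cached output.
        input_missing = Step(name='input_missing',
                             transformer=FillMissing(),   # hypothetical transformer
                             input_data=['input'],
                             experiment_directory='./experiments',
                             cache_output=True)

        branch_a = Step(name='branch_a',
                        transformer=ModelA(),             # hypothetical transformer
                        input_steps=[input_missing],
                        experiment_directory='./experiments')

        branch_b = Step(name='branch_b',
                        transformer=ModelB(),             # hypothetical transformer
                        input_steps=[input_missing],
                        experiment_directory='./experiments')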

    opened by denyslazarenko 4
  • Difference between cache and persist

    I do not really get the difference between these two things. Both of them save the result of execution to disk. Is it a good idea to add cache_output to all the Steps to avoid executing anything twice? In some of your examples you use both cache and persist at the same time; I think it would be better to use only one of them... (A sketch follows below.)
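
    For context, a sketch of my reading of the two flags (an assumption based on this thread, not an authoritative answer): cache_output avoids recomputation within a single run, while persist_output writes the output under the experiment directory so later runs can reload it via load_persisted_output. FeatureExtractor is a hypothetical transformer:

        features = Step(name='features',
                        transformer=FeatureExtractor(),  # hypothetical transformer
                        input_data=['input'],
                        experiment_directory='./experiments',
                        cache_output=True,           # reused by sibling steps in this run
                        persist_output=True,         # written to disk for future runs
                        load_persisted_output=True)  # reload instead of recomputing next run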

    opened by denyslazarenko 2
  • ENH: Adds id to support output caching

    Fixes https://github.com/neptune-ml/steppy/issues/39

    This PR adds an optional id field to the data dictionary. When cache_output is set to True, the id field is appended to step.name to distinguish between output caches produced by different data dictionaries.

    For example:

    data_train = {
        'id': 'data_train',
        'input': {
            'features': np.array([
                [1, 6],
                [2, 5],
                [3, 4]
            ]),
            'labels': np.array([2, 5, 3]),
        }
    }
    step = Step(
        name='test_cache_output_with_key',
        transformer=IdentityOperation(),
        input_data=['input'],
        experiment_directory='/exp_dir',
        cache_output=True
    )
    step.fit_transform(data_train)
    

    This will produce an output cache file at /exp_dir/cache/test_cache_output_with_key__data_train.

    opened by thomasjpfan 2
  • Simplified adapter syntax

    This is my idea for simplifying the adapter syntax. The benefit is that importing the extractor E from the adapter module is no longer needed. On the other hand, the rules for deciding whether something is an atomic recipe, part of a larger recipe, or even a constant get more complicated. A sketch of the contrast follows.
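
    A sketch of the contrast as I read the proposal; the first form follows the current steppy.adapter interface, while the second is the suggested simplification and does not exist yet:

        from steppy.adapter import Adapter, E

        # Current syntax: the extractor E must be imported.
        adapter = Adapter({'X': E('input_1', 'features')})

        # Proposed syntax (hypothetical): plain tuples, no E import needed.
        adapter = Adapter({'X': ('input_1', 'features')})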

    feature-request API-design 
    opened by mromaniukcdl 2
  • refactor adapter.py

    Problem: currently the user must from steppy.adapter import Adapter, E in order to use adapters.

    Refactor so that:

    • the user does not have to import E
    • an Example is added to the docstrings

    The refactor should be comprehensive, so that it:

    • corrects the code
    • corrects the tests
    • corrects the docstrings
    feature-request API-design 
    opened by kamil-kaczmarek 2
  • PyTorch model is never saved as checkpoint after first epoch

    Look here: https://github.com/minerva-ml/gradus/blob/dev/steps/pytorch/callbacks.py#L266. If self.epoch_id is equal to 0, then loss_sum is equal to self.best_score and the model is not saved. I think this should be fixed, because sometimes we want the model from the first epoch to be saved. A sketch of a possible fix follows.
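
    Since the linked source is not shown here, the guard below is reconstructed from the description above and is only a hypothetical sketch of the suggested fix:

        # Hypothetical reconstruction of the checkpoint guard in the callback:
        if self.epoch_id == 0 or loss_sum < self.best_score:
            self.best_score = loss_sum
            self.save_model(checkpoint_path)  # hypothetical save helper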

    bug feature-request 
    opened by apyskir 2
  • Unintuitive adapter syntax

    Current syntax for adapters has some peculiarities. Consider the following example.

            step = Step(
                name='ensembler',
                transformer=Dummy(),
                input_data=['input_1'],
                adapter={'X': [('input_1', 'features')]},
                cache_dirpath='.cache'
            )
    

    This step basically extracts one element of the input. It seems redundant to write both brackets and parentheses; doing adapter={'X': ('input_1', 'features')} should be sufficient.

    Moreover, to my surprise, adapter={'X': [('input_1', 'features'), ('input_2', 'extra_features')]} is incorrect, and currently leads to ValueError: too many values to unpack (expected 2).

    My suggestions for making the syntax consistent are (summarized in the sketch after this list):

    1. adapter={'X': ('input_1', 'features')} should map X to the extracted features.
    2. adapter={'X': [...]} should map X to a list of extracted objects (specified by the elements of the list). In particular, adapter={'X': [('input_1', 'features')]} should map X to a one-element list containing the extracted features.
    3. adapter={'X': ([...], func)} should extract the appropriate objects and put them in a list, then func should be called on that list, and X should map to the result of that call.
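
    A compact restatement of the three proposed rules (hypothetical syntax, since this describes suggested rather than current behavior; pandas_concat_inputs stands in for any combining function):

        proposed_adapter = {
            # Rule 1: a single tuple maps X to the extracted object itself.
            'X1': ('input_1', 'features'),
            # Rule 2: a list maps X to a list of extracted objects.
            'X2': [('input_1', 'features'), ('input_2', 'extra_features')],
            # Rule 3: (list, func) applies func to the list of extracted objects.
            'X3': ([('input_1', 'features'), ('input_2', 'extra_features')],
                   pandas_concat_inputs),
        }
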
    API-design 
    opened by grzes314 2
  • 2nd version docs for steppy

    Pull Request template

    Doc contributions

    This represents version 0.01, where we/you were at 0.0. As you should be able to see, I was able to use 95% of what was there previously. I redid index.rst, redid conf.py, and added the directory docs.nbdocs.

    Needs more work, about a day's worth, before pushing out to Read the Docs.

    I found the docstrings very strong.

    I suggest, though not very strongly, that steppy-toolkit and steppy-examples be merged into one project.

    I see you use the Google docstring style; I will switch from NumPy style.

    Regards Bruce

    opened by bcottman 1
  • FAQ DOC

    Started. On the first pass, I intend to fill it with my (naive/embarrassing) discoveries and really good (i.e. incredibly stupid) questions and enlightening answers from gaggle.

    opened by bcottman 1
  • Let's make it possible to transform based on checkpoints

    Hi! Let's assume I'm training a huge network for a lot of epochs, and it saves checkpoints in the checkpoints folder. I suggest adding the possibility of running transform on a pipeline when the transformer is not in experiment_dir/transformers but a checkpoint is available in the checkpoints folder. What do you think?

    opened by apyskir 0
  • Structure of steps - ideas for making it cleaner

    @kamil-kaczmarek, @jakubczakon: I know this is a bunch of different ideas and suggestions clustered into one issue. Let me know which of them are compatible with the current roadmap. (I am happy to contribute/collaborate on some.)

    • a default data folder (e.g. ./.steppy/step_name/), configurable if needed; overriding only when strictly necessary
    • no input_data; it complicates things for no obvious reason!
    • names optional, automatically generated from class names plus a number
    • a more explicit job structure (steps = Sequence([step1, step2])); vide the Keras API
    • adapters inheriting from BaseTransformer, e.g. step = Rename({'a': 'aaa', 'b': 'bbb'}); vide rename in Pandas
    • how to separate persisting data vs persisting parameters? (e.g. for image preprocessing, it may be time-saving to save once-processed images)
    • built-in data tests (e.g. len(X) == len(Y)) in def test
    • a built-in test that persist->load is correct (i.e. loaded data is the same as saved); see the sketch below
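
    A sketch of what the last two checks might look like, written as hypothetical helpers on top of the transformer's existing persist/load methods:

        def check_data(X, y):
            # Built-in data test: features and labels must align.
            assert len(X) == len(y), 'len(X) != len(y)'

        def check_persist_load_roundtrip(transformer, path):
            # Built-in test that persist -> load restores a usable transformer.
            transformer.persist(path)
            reloaded = transformer.load(path)
            assert isinstance(reloaded, type(transformer))
            return reloaded
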
    opened by stared 2
  • Do all Steps execute in parallel?

    Is it necessary to divide the executions inside my class into separate Threads, or just to divide them between Steps? For example, I could fit KNN and PCA in one class method and parallelize them there, or create two separate classes for them...

    opened by denyslazarenko 2
  • Maybe load_saved_input?

    Hi, I have a proposal: let's make it possible to dump the adapted input of a step to disk. It's very handy when you are working on the 5th or 10th step in a pipeline that has 2, 3, or more input steps. Now you have to set the flag load_saved_output=True on each of the input steps to be able to work on your beloved step. If you could just set load_saved_input=True (adapted or not adapted, I think it's worth discussing) on the step you are currently working on, it would be much easier. What do you think? A sketch of the contrast follows.
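
    A sketch of the contrast; load_saved_output is the existing flag the author mentions, while load_saved_input is the proposed, hypothetical one (the transformers here are also hypothetical):

        # Today: every input step must be constructed with load_saved_output=True.
        step_a = Step(name='step_a',
                      transformer=TransformerA(),        # hypothetical transformer
                      input_data=['input'],
                      experiment_directory='./experiments',
                      load_saved_output=True)
        # ... and likewise for step_b and step_c.

        # Proposal (hypothetical flag): flag only the step being worked on.
        my_step = Step(name='my_step',
                       transformer=MyTransformer(),      # hypothetical transformer
                       input_steps=[step_a, step_b, step_c],
                       experiment_directory='./experiments',
                       load_saved_input=True)            # proposed flag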

    opened by apyskir 0
Releases (v0.1.16)

Owner: minerva.ml