A machine learning template for projects based on the scikit-learn library.

Overview

Scikit-learn-project-template

About the project

  • Folder structure suitable for many machine learning projects, especially those with a small amount of available training data.
  • .json config file support for convenient parameter tuning.
  • Customizable command line options for more convenient parameter tuning.
  • Abstract base classes for faster development:
    • BaseOptimizer handles execution of the grid search, saving and loading of models, and generation of train and test reports.
    • BaseDataLoader handles splitting of training and testing data. The split is performed according to the settings provided in the config file.
    • BaseModel handles construction of the consecutive pipeline steps defined in the config file.

Getting Started

To get a local copy up and running, follow the steps below.

Requirements

  • Python >= 3.7
  • Packages listed in the requirements.txt file
  • (Anaconda for easy installation)

Install dependencies

Create and activate virtual environment:

conda create -n yourenvname python=3.7
conda activate yourenvname

Install packages:

python -m pip install -r requirements.txt

Folder Structure

sklearn-project-template/
│
├── main.py - main script to start training and (optionally) testing
│
├── base/ - abstract base classes
│   ├── base_data_loader.py
│   ├── base_model.py
│   └── base_optimizer.py
│
├── configs/ - holds configuration for training and testing
│   ├── config_classification.json
│   └── config_regression.json
│
├── data/ - default directory for storing input data
│
├── data_loaders/ - anything about data loading goes here
│   └── data_loaders.py
│
├── models/ - models
│   ├── __init__.py - defines models by name
│   └── models.py
│
├── optimizers/ - optimizers
│   └── optimizers.py
│
├── saved/ - config, model and reports are saved here
│   ├── Classification
│   └── Regression
│
├── utils/ - utility functions
│   ├── parse_config.py - class to handle config file and cli options
│   └── utils.py
│
├── wrappers/ - wrappers for modified sklearn models or self-defined transforms
│   ├── data_transformations.py
│   └── wrappers.py

Usage

Models in this repo are trained on two well-known datasets: iris and boston. The first is used for the classification problem and the second for the regression problem.

Run classification:

python main.py -c configs/config_classification.json

Run regression:

python main.py -c configs/config_regression.json

Config file format

Config files are in .json format. An example of such a config is shown below:

{
    "name": "Classification",   // session name

    "model": {
        "type": "Model",    // model name
        "args": {
            "pipeline": ["scaler", "PLS", "pf", "SVC"]     // pipeline of methods
        }
    },

    "tuned_parameters":[{   // parameters to be tuned with search method
                        "SVC__kernel": ["rbf"],
                        "SVC__gamma": [1e-5, 1e-6, 1],
                        "SVC__C": [1, 100, 1000],
                        "PLS__n_components": [1,2,3]
                    }],

    "optimizer": "OptimizerClassification",    // name of optimizer

    "search_method":{
        "type": "GridSearchCV",    // method used to search through parameters
        "args": {
            "refit": false,
            "n_jobs": -1,
            "verbose": 2,
            "error_score": 0
        }
    },

    "cross_validation": {
        "type": "RepeatedStratifiedKFold",     // type of cross-validation used
        "args": {
            "n_splits": 5,
            "n_repeats": 10,
            "random_state": 1
        }
    },

    "data_loader": {
        "type": "Classification",      // name of dataloader class
        "args":{
            "data_path": "data/path-to-file",    // path to data
            "shuffle": true,    // if data shuffled before optimization
            "test_split": 0.2,  // use split method for model testing
            "stratify": true,   // if data stratified before optimization
            "random_state":1    // random state for repeaded output
        }
    },

    "score": "max balanced_accuracy",     // mode and metrics used for scoring
    "test_model": true,     // if model is tested after training
    "save_dir": "saved/"    // directory of saved reports, models and configs

}

Additional parameters can be added to the config file. See the scikit-learn documentation for descriptions of the tuned parameters, the search method, the cross-validation, and the available scoring metrics.
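For example, since the pipeline above contains a PolynomialFeatures step registered as pf, its degree could also be included in the search by extending tuned_parameters. This is only a sketch; whether such a combination is useful depends on the data:

"tuned_parameters":[{
                    "SVC__kernel": ["rbf"],
                    "SVC__gamma": [1e-5, 1e-6, 1],
                    "SVC__C": [1, 100, 1000],
                    "PLS__n_components": [1,2,3],
                    "pf__degree": [1, 2]    // added: degree of the PolynomialFeatures step
                }],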

Pipeline

Methods added to the config pipeline must first be defined in the models/__init__.py file. For the previous config file example, the following must be added:

from wrappers import *
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures

methods_dict = {
    'pf': PolynomialFeatures,
    'scaler': StandardScaler,
    'PLS': PLSRegressionWrapper,
    'SVC': SVC,
}

The majority of algorithms implemented in the scikit-learn library can be directly imported and used. Some algorithms need a small modification before usage; one such example is partial least squares (PLS), whose modification is implemented in wrappers/wrappers.py. You can also implement your own method. An example wrapper for the Savitzky-Golay filter is shown in wrappers/data_transformations.py. The implementation must support the standard method calls, e.g. fit(), transform(), etc.
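As an illustration only, a custom transformer compatible with the pipeline can follow the usual scikit-learn estimator interface. The class below (ClipTransformer) is a hypothetical example, not part of the template:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class ClipTransformer(BaseEstimator, TransformerMixin):
    """Hypothetical transformer: clips features to the range seen during fit."""

    def fit(self, X, y=None):
        # learn per-feature bounds from the training data
        self.min_ = np.min(X, axis=0)
        self.max_ = np.max(X, axis=0)
        return self

    def transform(self, X):
        # apply the learned bounds to new data
        return np.clip(X, self.min_, self.max_)

Once defined, such a class would be registered in methods_dict (e.g. 'clip': ClipTransformer) and referenced by that name in the config pipeline.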

Customization

Custom CLI options

Changing values in the config file is a clean, safe and easy way of tuning hyperparameters. However, command line options are sometimes more convenient for values that need to be changed often or quickly.

This template uses the configuration stored in the .json file by default, but by registering custom options as follows you can change some of the values with CLI flags.

import collections

# simple class-like object having 3 attributes: `flags`, `type`, `target`.
CustomArgs = collections.namedtuple('CustomArgs', 'flags type target')
options = [
    CustomArgs(['-cv', '--cross_validation'], type=int, target='cross_validation;args;n_repeats'),
    # options added here can be modified by command line flags.
]

The target argument should be a sequence of keys used to access that option in the config dict. In this example, the target for the number of cross-validation repeats is ('cross_validation', 'args', 'n_repeats'), because config['cross_validation']['args']['n_repeats'] points to the number of repeats.
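With that option registered, the number of repeats could then be overridden from the command line, roughly as follows (the exact flag handling depends on utils/parse_config.py):

python main.py -c configs/config_classification.json -cv 5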

Data Loader

  • Writing your own data loader
  1. Inherit BaseDataLoader

    BaseDataLoader handles:

    • Train/test splitting
    • Data shuffling
  • Usage

    Loaded data must be assigned to the data_handler (dh) in an appropriate manner. If dh.X_data_test and dh.y_data_test are not assigned in advance, the train/test split can be created by the base data loader. If "test_split": 0.0 is set in the config file, the whole dataset is used for training. Another option is to assign both the train and test sets, as shown below. In this case the train data is used for optimization and the test data for evaluation of the model.

    data_handler.X_data = X_train
    data_handler.y_data = y_train
    data_handler.X_data_test = X_test
    data_handler.y_data_test = y_test
  • Example

    Please refer to data_loaders/data_loaders.py for data loading example.

Optimizer

  • Writing your own optimizer
  1. Inherit BaseOptimizer

    BaseOptimizer handles:

    • Optimization procedure
    • Model saving and loading
    • Report saving
  2. Implementing abstract methods

    You need to implement fitted_model(), which must return the fitted model. Optionally, you can customize the format of the train/test reports with create_train_report() and create_test_report().

  • Example

    Please refer to optimizers/optimizers.py for optimizer example.

Model

  • Writing your own model
  1. Inherit BaseModel

    BaseModel handles:

    • Initialization of the steps defined in the config pipeline
    • Modification of steps
  2. Implementing abstract methods

    You need to implement created_model(), which must return the created model.

  • Usage

    Initialization of pipeline methods is performed with create_steps(). Steps can later be modified using change_step(). An example of how to change a step is shown below, where a SequentialFeatureSelector is added to the pipeline.

    def __init__(self, pipeline):
        steps = self.create_steps(pipeline)
    
        rf = RandomForestRegressor(random_state=1)
        clf = TransformedTargetRegressor(regressor=rf,
                                        func=np.log1p,
                                        inverse_func=np.expm1)
        sfs = SequentialFeatureSelector(clf, n_features_to_select=2, cv=3)
    
        steps = self.change_step('sfs', sfs, steps)
    
        self.model = Pipeline(steps=steps)

    Beware that in this case 'sfs' needs to be added to the pipeline in the config file. Otherwise, no step in the pipeline is changed.
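    As an illustration, the config pipeline for this model could then look roughly like the line below; the other step names are placeholders that would also have to be registered in models/__init__.py:

    "pipeline": ["scaler", "sfs", "RF"]    // 'sfs' is later replaced via change_step()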

  • Example

    Please refer to models/models.py for a model example.

Roadmap

See open issues to request a feature or report a bug.

Contribution

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

How to start contributing:

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Feel free to contribute any kind of function or enhancement.

License

This project is licensed under the MIT License. See LICENSE for more details.

Acknowledgements

This project is inspired by the project pytorch-template by Victor Huang. I would like to confess that some functions, the architecture and some parts of the readme were directly copied from that repo. But honestly, what should I do - the project is absolutely amazing!

Consider supporting

Do you feel generous today? I am still a student and would make good use of some extra money :P
