A machine learning template for projects based on the scikit-learn library.

Overview

Scikit-learn-project-template

About the project

  • Folder structure suitable for many machine learning projects, especially those with a small amount of available training data.
  • .json config file support for convenient parameter tuning.
  • Customizable command line options for more convenient parameter tuning.
  • Abstract base classes for faster development:
    • BaseOptimizer handles execution of the grid search, saving and loading of models, and formation of train and test reports.
    • BaseDataLoader handles splitting of training and testing data. The split is performed according to the settings provided in the config file.
    • BaseModel handles construction of the consecutive steps defined in the config file.

Getting Started

To get a local copy up and running, follow the steps below.

Requirements

  • Python >= 3.7
  • Packages included in requirements.txt file
  • (Anaconda for easy installation)

Install dependencies

Create and activate virtual environment:

conda create -n yourenvname python=3.7
conda activate yourenvname

Install packages:

python -m pip install -r requirements.txt

Folder Structure

sklearn-project-template/
│
├── main.py - main script to start training and (optionally) testing
│
├── base/ - abstract base classes
│   ├── base_data_loader.py
│   ├── base_model.py
│   └── base_optimizer.py
│
├── configs/ - holds configuration for training and testing
│   ├── config_classification.json
│   └── config_regression.json
│
├── data/ - default directory for storing input data
│
├── data_loaders/ - anything about data loading goes here
│   └── data_loaders.py
│
├── models/ - models
│   ├── __init__.py - defines models by name
│   └── models.py
│
├── optimizers/ - optimizers
│   └── optimizers.py
│
├── saved/ - configs, models and reports are saved here
│   ├── Classification
│   └── Regression
│
├── utils/ - utility functions
│   ├── parse_config.py - class to handle config file and cli options
│   └── utils.py
│
└── wrappers/ - wrappers of modified sklearn models or self-defined transforms
    ├── data_transformations.py
    └── wrappers.py

Usage

Models in this repo are trained on two well-known datasets: Iris and Boston housing. The first is used for the classification problem and the second for the regression problem.

Run classification:

python main.py -c configs/config_classification.json

Run regression:

python main.py -c configs/config_regression.json

Config file format

Config files are in .json format. An example of such a config is shown below:

{
    "name": "Classification",   // session name

    "model": {
        "type": "Model",    // model name
        "args": {
            "pipeline": ["scaler", "PLS", "pf", "SVC"]     // pipeline of methods
        }
    },

    "tuned_parameters":[{   // parameters to be tuned with search method
                        "SVC__kernel": ["rbf"],
                        "SVC__gamma": [1e-5, 1e-6, 1],
                        "SVC__C": [1, 100, 1000],
                        "PLS__n_components": [1,2,3]
                    }],

    "optimizer": "OptimizerClassification",    // name of optimizer

    "search_method":{
        "type": "GridSearchCV",    // method used to search through parameters
        "args": {
            "refit": false,
            "n_jobs": -1,
            "verbose": 2,
            "error_score": 0
        }
    },

    "cross_validation": {
        "type": "RepeatedStratifiedKFold",     // type of cross-validation used
        "args": {
            "n_splits": 5,
            "n_repeats": 10,
            "random_state": 1
        }
    },

    "data_loader": {
        "type": "Classification",      // name of dataloader class
        "args":{
            "data_path": "data/path-to-file",    // path to data
            "shuffle": true,    // if data shuffled before optimization
            "test_split": 0.2,  // use split method for model testing
            "stratify": true,   // if data stratified before optimization
            "random_state":1    // random state for repeaded output
        }
    },

    "score": "max balanced_accuracy",     // mode and metrics used for scoring
    "test_model": true,     // if model is tested after training
    "save_dir": "saved/"    // directory of saved reports, models and configs

}

Additional parameters can be added to the config file. See the scikit-learn documentation for descriptions of the tuned parameters, search method and cross-validation. Possible metrics for model evaluation can be found here.

Pipeline

Methods added to the config pipeline must first be defined in the models/__init__.py file. For the previous config file example, the following must be added:

from wrappers import *
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures

methods_dict = {
    'pf': PolynomialFeatures,
    'scaler': StandardScaler,
    'PLS': PLSRegressionWrapper,
    'SVC': SVC,
}

The majority of algorithms implemented in the scikit-learn library can be directly imported and used. Some algorithms need a little modification before use; one such example is Partial Least Squares (PLS), whose modification is implemented in wrappers/wrappers.py. You can also implement your own method. An example wrapper for a Savitzky-Golay filter is shown in wrappers/data_transformations.py. The implementation must satisfy the standard method calls, e.g. fit(), transform(), etc.
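
As a rough illustration, a self-defined transform only needs to follow the scikit-learn estimator interface. The class name MovingAverageTransformer and its window parameter below are hypothetical and not part of this template; see wrappers/data_transformations.py for the actual examples.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MovingAverageTransformer(BaseEstimator, TransformerMixin):
    """Hypothetical transform that smooths each sample with a moving average."""

    def __init__(self, window=3):
        self.window = window

    def fit(self, X, y=None):
        # Nothing to learn, but fit() must exist so the class can be used
        # as a step in an sklearn pipeline.
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=float)
        kernel = np.ones(self.window) / self.window
        # Smooth every row (sample) independently.
        return np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode='same'), 1, X
        )

Such a class could then be registered in methods_dict and referenced by name in the config pipeline.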

Customization

Custom CLI options

Changing values in the config file is a clean, safe and easy way of tuning hyperparameters. However, sometimes it is better to have command line options when some values need to be changed often or quickly.

This template uses the configuration stored in the .json file by default, but by registering custom options as follows you can change some of the values using CLI flags.

import collections

# simple class-like object having 3 attributes, `flags`, `type`, `target`.
CustomArgs = collections.namedtuple('CustomArgs', 'flags type target')
options = [
    CustomArgs(['-cv', '--cross_validation'], type=int, target='cross_validation;args;n_repeats'),
    # options added here can be modified by command line flags.
]

The target argument should be a sequence of keys used to access that option in the config dict. In this example, the target for the number of repeats in cross-validation is 'cross_validation;args;n_repeats' because config['cross_validation']['args']['n_repeats'] points to the number of repeats.
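
Assuming the option above is registered in main.py, the number of cross-validation repeats could then be overridden at run time, for example:

python main.py -c configs/config_regression.json --cross_validation 5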

Data Loader

  • Writing your own data loader
  1. Inherit BaseDataLoader

    BaseDataLoader handles:

    • Train/test splitting
    • Data shuffling
  • Usage

    Loaded data must be assigned to the data_handler (dh) in an appropriate manner. If dh.X_data_test and dh.y_data_test are not assigned in advance, a train/test split can be created by the base data loader. If "test_split": 0.0 is set in the config file, the whole dataset is used for training. Another option is to assign both the train and test sets as shown below; in this case the train data will be used for optimization and the test data for evaluation of the model. A rough illustrative sketch of a complete loader is given at the end of this section.

    data_handler.X_data = X_train
    data_handler.y_data = y_train
    data_handler.X_data_test = X_test
    data_handler.y_data_test = y_test
  • Example

    Please refer to data_loaders/data_loaders.py for a data loading example.
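
    As a rough orientation, a custom loader might look like the sketch below. The class name WineDataLoader and the constructor and base-class signatures are assumptions made for illustration only; the actual interface is defined in base/base_data_loader.py and data_loaders/data_loaders.py.

    from sklearn.datasets import load_wine
    from base import BaseDataLoader   # actual import path may differ

    class WineDataLoader(BaseDataLoader):   # hypothetical example class
        def __init__(self, data_handler, shuffle, test_split, stratify, random_state):
            # Hand the raw data to the data handler; the base class is assumed
            # to create the train/test split from the config settings.
            X, y = load_wine(return_X_y=True)
            data_handler.X_data = X
            data_handler.y_data = y
            super().__init__(data_handler, shuffle, test_split, stratify, random_state)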

Optimizer

  • Writing your own optimizer
  1. Inherit BaseOptimizer

    BaseOptimizer handles:

    • Optimization procedure
    • Model saving and loading
    • Report saving
  2. Implementing abstract methods

    You need to implement fitted_model(), which must return the fitted model. Optionally, you can customize the format of the train/test reports by implementing create_train_report() and create_test_report(). A rough illustrative sketch is given at the end of this section.

  • Example

    Please refer to optimizers/optimizers.py for an optimizer example.
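
    As a rough orientation, a custom optimizer might look like the sketch below. The attribute names self.model, self.search, self.X_train and self.y_train are assumptions made for illustration only; the actual interface is defined in base/base_optimizer.py and optimizers/optimizers.py.

    from sklearn.base import clone
    from base import BaseOptimizer   # actual import path may differ

    class MyOptimizer(BaseOptimizer):   # hypothetical example class
        def fitted_model(self):
            # The example config uses refit=False, so build a fresh model with
            # the best parameters found by the search and fit it on the train set.
            # All attribute names used here are assumed, not the template's real API.
            model = clone(self.model).set_params(**self.search.best_params_)
            model.fit(self.X_train, self.y_train)
            return model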

Model

  • Writing your own model
  1. Inherit BaseModel

    BaseModel handles:

    • Initialization of the steps defined in the config pipeline
    • Modification of steps
  2. Implementing abstract methods

    You need to implement created_model(), which must return the created model.

  • Usage

    Initialization of pipeline methods is performed with create_steps(). Steps can later be modified with change_step(). An example of how to change a step is shown below, where a SequentialFeatureSelector is added to the pipeline.

    def __init__(self, pipeline):
        steps = self.create_steps(pipeline)
    
        rf = RandomForestRegressor(random_state=1)
        clf = TransformedTargetRegressor(regressor=rf,
                                        func=np.log1p,
                                        inverse_func=np.expm1)
        sfs = SequentialFeatureSelector(clf, n_features_to_select=2, cv=3)
    
        steps = self.change_step('sfs', sfs, steps)
    
        self.model = Pipeline(steps=steps)

    Beware that in this case 'sfs' needs to be added to the pipeline in the config file; otherwise, no step in the pipeline is changed.

  • Example

    Please refer to models/models.py for a model example.

Roadmap

See open issues to request a feature or report a bug.

Contribution

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

How to start contributing:

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Feel free to contribute any kind of function or enhancement.

License

This project is licensed under the MIT License. See LICENSE for more details.

Acknowledgements

This project is inspired by the project pytorch-template by Victor Huang. I would like to confess that some functions, the architecture and some parts of this readme were directly copied from that repo. But to be honest, what should I do - the project is absolutely amazing!

Consider supporting

Do you feel generous today? I am still a student and would make good use of some extra money :P
