A machine learning project template based on the scikit-learn library.

Overview

Scikit-learn-project-template

About the project

  • Folder structure suitable for many machine learning projects, especially those with a small amount of available training data.
  • .json config file support for convenient parameter tuning.
  • Customizable command line options for more convenient parameter tuning.
  • Abstract base classes for faster development:
    • BaseOptimizer handles execution of the grid search, saving and loading of models, and formation of the train and test reports.
    • BaseDataLoader handles splitting of the training and testing data. The split is performed according to the settings provided in the config file.
    • BaseModel handles construction of the consecutive pipeline steps defined in the config file.

Getting Started

To get a local copy up and running, follow the steps below.

Requirements

  • Python >= 3.7
  • Packages included in requirements.txt file
  • (Optional) Anaconda for easy installation

Install dependencies

Create and activate virtual environment:

conda create -n yourenvname python=3.7
conda activate yourenvname

Install packages:

python -m pip install -r requirements.txt

Folder Structure

sklearn-project-template/
│
├── main.py - main script to start training and (optionally) testing
│
├── base/ - abstract base classes
│   ├── base_data_loader.py
│   ├── base_model.py
│   └── base_optimizer.py
│
├── configs/ - holds configuration for training and testing
│   ├── config_classification.json
│   └── config_regression.json
│
├── data/ - default directory for storing input data
│
├── data_loaders/ - anything about data loading goes here
│   └── data_loaders.py
│
├── models/ - models
│   ├── __init__.py - defines models by name
│   └── models.py
│
├── optimizers/ - optimizers
│   └── optimizers.py
│
├── saved/ - config, model and reports are saved here
│   ├── Classification
│   └── Regression
│
├── utils/ - utility functions
│   ├── parse_config.py - class to handle config file and cli options
│   └── utils.py
│
├── wrappers/ - wrappers of modified sklearn models or self-defined transforms
│   ├── data_transformations.py
│   └── wrappers.py

Usage

Models in this repo are trained on two well-known datasets: iris and boston. The first is used for the classification problem and the second for the regression problem.

Run classification:

python main.py -c configs/config_classification.json

Run regression:

python main.py -c configs/config_regression.json

Config file format

Config files are in .json format. An example of such a config is shown below:

{
    "name": "Classification",   // session name

    "model": {
        "type": "Model",    // model name
        "args": {
            "pipeline": ["scaler", "PLS", "pf", "SVC"]     // pipeline of methods
        }
    },

    "tuned_parameters":[{   // parameters to be tuned with search method
                        "SVC__kernel": ["rbf"],
                        "SVC__gamma": [1e-5, 1e-6, 1],
                        "SVC__C": [1, 100, 1000],
                        "PLS__n_components": [1,2,3]
                    }],

    "optimizer": "OptimizerClassification",    // name of optimizer

    "search_method":{
        "type": "GridSearchCV",    // method used to search through parameters
        "args": {
            "refit": false,
            "n_jobs": -1,
            "verbose": 2,
            "error_score": 0
        }
    },

    "cross_validation": {
        "type": "RepeatedStratifiedKFold",     // type of cross-validation used
        "args": {
            "n_splits": 5,
            "n_repeats": 10,
            "random_state": 1
        }
    },

    "data_loader": {
        "type": "Classification",      // name of dataloader class
        "args":{
            "data_path": "data/path-to-file",    // path to data
            "shuffle": true,    // if data shuffled before optimization
            "test_split": 0.2,  // use split method for model testing
            "stratify": true,   // if data stratified before optimization
            "random_state":1    // random state for repeaded output
        }
    },

    "score": "max balanced_accuracy",     // mode and metrics used for scoring
    "test_model": true,     // if model is tested after training
    "save_dir": "saved/"    // directory of saved reports, models and configs

}

Additional parameters can be added to the config file. See the scikit-learn documentation for descriptions of the tuned parameters, the search method and the cross-validation, as well as the possible metrics for model evaluation.
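
For illustration, the config sections above map onto scikit-learn objects roughly as in the sketch below. This is a simplified, hypothetical reconstruction (the PLS and polynomial-feature steps are omitted for brevity); the actual wiring lives in the base and optimizer classes:

# Simplified, hypothetical reconstruction of the search set-up described
# by the config above; not the template's actual implementation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
model = Pipeline([('scaler', StandardScaler()), ('SVC', SVC())])  # "model" section

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=1)  # "cross_validation"
search = GridSearchCV(
    model,
    param_grid={'SVC__kernel': ['rbf'],
                'SVC__gamma': [1e-5, 1e-6, 1],
                'SVC__C': [1, 100, 1000]},   # "tuned_parameters"
    scoring='balanced_accuracy',             # metric part of "score"
    cv=cv,
    refit=False, n_jobs=-1, verbose=2, error_score=0,  # "search_method" args
)
search.fit(X, y)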

Pipeline

Methods added to the config pipeline must first be defined in the models/__init__.py file. For the previous config file example, the following must be added:

from wrappers import *  # provides PLSRegressionWrapper
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures

methods_dict = {
    'pf': PolynomialFeatures,
    'scaler': StandardScaler,
    'PLS': PLSRegressionWrapper,
    'SVC': SVC,
}
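
Conceptually, the names listed in the config pipeline are resolved against this dictionary and instantiated as pipeline steps, roughly like this (a simplified sketch, not the actual create_steps() implementation):

# Simplified sketch of how config pipeline names could be resolved into
# sklearn Pipeline steps; the actual logic lives in BaseModel.create_steps().
from sklearn.pipeline import Pipeline

pipeline = ["scaler", "PLS", "pf", "SVC"]  # from the config file
steps = [(name, methods_dict[name]()) for name in pipeline]
model = Pipeline(steps=steps)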

The majority of algorithms implemented in the scikit-learn library can be imported and used directly. Some algorithms, however, need a little modification before usage; one such example is partial least squares (PLS), whose modification is implemented in wrappers/wrappers.py. You can implement your own methods as well, as sketched below; an example wrapper for the Savitzky-Golay filter is shown in wrappers/data_transformations.py. The implementation must satisfy the standard method calls, e.g. fit(), transform(), etc.
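
As an illustration, a minimal custom transformer could look like this (hypothetical class name and defaults, assuming the standard scikit-learn transformer interface; see wrappers/data_transformations.py for the actual example):

# Minimal sketch of a custom transformer wrapping a Savitzky-Golay filter;
# the class name and default parameters are hypothetical.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.base import BaseEstimator, TransformerMixin

class SavgolWrapper(BaseEstimator, TransformerMixin):
    """Smooths each sample (row of X) with a Savitzky-Golay filter."""

    def __init__(self, window_length=11, polyorder=2):
        self.window_length = window_length
        self.polyorder = polyorder

    def fit(self, X, y=None):
        # The filter is stateless, so there is nothing to learn.
        return self

    def transform(self, X):
        return savgol_filter(np.asarray(X), self.window_length,
                             self.polyorder, axis=1)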

Customization

Custom CLI options

Changing the values in the config file is a clean, safe and easy way of tuning hyperparameters. However, sometimes it is better to have command line options when some values need to be changed often or quickly.

This template uses the configuration stored in the .json file by default, but by registering custom options as follows you can change some of the values with CLI flags.

import collections

# simple class-like object having 3 attributes: `flags`, `type`, `target`.
CustomArgs = collections.namedtuple('CustomArgs', 'flags type target')
options = [
    CustomArgs(['-cv', '--cross_validation'], type=int, target='cross_validation;args;n_repeats'),
    # options added here can be modified by command line flags.
]

The target argument should be a sequence of keys used to access that option in the config dict. In this example, the target for the number of repeats in cross-validation is 'cross_validation;args;n_repeats', i.e. the key sequence ('cross_validation', 'args', 'n_repeats'), because config['cross_validation']['args']['n_repeats'] points to the number of repeats.
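
A minimal sketch of how such a target could be resolved (hypothetical helper, assuming semicolon-separated key paths as in the example above):

# Hypothetical helper showing how a CLI value can be written into the
# nested config dict by following the `target` key path.
from functools import reduce
from operator import getitem

def set_by_path(config, target, value):
    keys = target.split(';')
    reduce(getitem, keys[:-1], config)[keys[-1]] = value

config = {'cross_validation': {'args': {'n_repeats': 10}}}
set_by_path(config, 'cross_validation;args;n_repeats', 5)
assert config['cross_validation']['args']['n_repeats'] == 5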

Data Loader

  • Writing your own data loader
  1. Inherit BaseDataLoader

    BaseDataLoader handles:

    • Train/test splitting
    • Data shuffling
  • Usage

    Loaded data must be assigned to the data handler (dh) in an appropriate manner. If dh.X_data_test and dh.y_data_test are not assigned in advance, a train/test split can be created by the base data loader. In case "test_split": 0.0 is set in the config file, the whole dataset is used for training. Another option is to assign both the train and test sets as shown below; in this case, the train data will be used for optimization and the test data for evaluation of the model.

    data_handler.X_data = X_train
    data_handler.y_data = y_train
    data_handler.X_data_test = X_test
    data_handler.y_data_test = y_test
  • Example

    Please refer to data_loaders/data_loaders.py for data loading example.

Optimizer

  • Writing your own optimizer
  1. Inherit BaseOptimizer

    BaseOptimizer handles:

    • Optimization procedure
    • Model saving and loading
    • Report saving
  2. Implementing abstract methods

    You need to implement fitted_model(), which must return the fitted model. Optionally, you can customize the format of the train/test reports with create_train_report() and create_test_report().

  • Example

    Please refer to optimizers/optimizers.py for optimizer example.

Model

  • Writing your own model
  1. Inherit BaseModel

    BaseModel handles:

    • Initialization defined in config pipeline
    • Modification of steps
  2. Implementing abstract methods

    You need to implement created_model(), which must return the created model.

  • Usage

    Initialization of the pipeline methods is performed with create_steps(). Steps can later be modified with change_step(). An example of how to change a step is shown below, where a sequential feature selector is added to the pipeline.

    # Requires numpy and, from scikit-learn: RandomForestRegressor,
    # TransformedTargetRegressor, SequentialFeatureSelector and Pipeline.
    def __init__(self, pipeline):
        steps = self.create_steps(pipeline)

        # Random forest whose target is log-transformed for fitting and
        # back-transformed for prediction.
        rf = RandomForestRegressor(random_state=1)
        clf = TransformedTargetRegressor(regressor=rf,
                                         func=np.log1p,
                                         inverse_func=np.expm1)
        sfs = SequentialFeatureSelector(clf, n_features_to_select=2, cv=3)

        # Replace the 'sfs' step from the config pipeline with the selector.
        steps = self.change_step('sfs', sfs, steps)

        self.model = Pipeline(steps=steps)

    Beware that in this case 'sfs' needs to be added to the pipeline in the config file; otherwise, no step in the pipeline is changed.

  • Example

    Please refer to models/models.py for a model example.

Roadmap

See open issues to request a feature or report a bug.

Contribution

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

How to get started with contributing:

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Feel free to contribute any kind of function or enhancement.

License

This project is licensed under the MIT License. See LICENSE for more details.

Acknowledgements

This project is inspired by the pytorch-template project by Victor Huang. I would like to confess that some functions, the architecture and some parts of the README were directly copied from that repo. But to be honest, what should I do - the project is absolutely amazing!

Consider supporting

Do you feel generous today? I am still a student and would make good use of some extra money :P
