Semi-Automated Data Processing

Overview


Preparing data for model learning is one of the most important steps in any project, and traditionally one of the most time-consuming. Data analysis plays a very important role in the entire data science workflow; in fact, it takes up most of the time in that workflow. According to Wikipedia, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis-testing task. Exploratory data analysis was promoted by John Tukey to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. There is a nice quote (not sure who said it):

**"In Data Science, 80% of time spent prepare data, 20% of time spent complain about need for prepare data."**

This project handles the task with minimal user interaction by analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques. You can use the project in a semi-interactive fashion, previewing the changes before they are made and accepting or rejecting them as you wish.

This project covers the three steps of any project workflow that come before model training:
1) Exploratory data analysis
2) Feature engineering
3) Feature selection


All these steps are carried out by the user by calling the following functions:

1) identify_feature(data):
This function identifies the categorical, continuous numerical, and discrete numerical features in the dataset. It also identifies datetime features and extracts the relevant information from them.

Input:
data=Dataset

Output:
df= Dataset
data_cont_num_feature= List of feature names containing continuous numerical values
data_dis_num_feature= List of feature names containing discrete numerical values
data_cat_feature= List of feature names containing categorical values
dt_feature= List of feature names containing datetime values
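
A minimal usage sketch; the import path and the dataset file name are assumptions, and the function is taken to return its outputs in the order listed above:

```python
import pandas as pd

# the module name is an assumption; import from wherever the project's functions are defined
from data_processing import identify_feature

data = pd.read_csv("bike_sharing.csv")  # placeholder dataset path
df, continuous_features, discrete_features, categorical_features, dt_features = identify_feature(data)
print(categorical_features, dt_features)
```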

2) plot_nan_feature(data, continuous_features, discrete_features, categorical_features, dependent_var):
This function identifies the missing values in the dataset and visualizes their impact on the dependent feature.

Input:
data=Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
categorical_features= List of feature names containing categorical values
dependent_var= Dependent feature name in string format

Output:
df= Dataset
nan_features= List of feature names containing NaN values
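
Continuing the sketch above (the dependent feature name 'count' is a placeholder):

```python
from data_processing import plot_nan_feature  # assumed import path

df, nan_features = plot_nan_feature(df, continuous_features, discrete_features,
                                    categorical_features, dependent_var="count")
```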

3) visualize_imputation_impact(data, continuous_features, discrete_features, categorical_features, nan_features, dependent_var):
This function visualizes the impact of different NaN-value imputation strategies on the distribution of values in each feature.

Input:
data=Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
categorical_features= List of feature names containing categorical values
nan_features= List of feature names containing NaN values
dependent_var= Dependent feature name in string format

Output:
None
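
A sketch, reusing the lists produced by the earlier calls:

```python
from data_processing import visualize_imputation_impact  # assumed import path

visualize_imputation_impact(df, continuous_features, discrete_features,
                            categorical_features, nan_features, dependent_var="count")
```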

4) nan_imputation(data, mean_feature, median_feature, mode_feature, random_feature, new_category):
This function imputes the NaN values in each feature as per the user input.

Input:
data=Dataset
mean_feature= List of feature names on which to carry out mean imputation
median_feature= List of feature names on which to carry out median imputation
mode_feature= List of feature names on which to carry out mode imputation
random_feature= List of feature names on which to carry out random imputation
new_category= List of feature names for which a new category is created for the NaN values

Output:
None
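
A hedged example; the column names below are placeholders, chosen after inspecting the imputation plots:

```python
from data_processing import nan_imputation  # assumed import path

nan_imputation(df,
               mean_feature=["temp"],          # placeholder column names
               median_feature=["humidity"],
               mode_feature=["season"],
               random_feature=[],
               new_category=["weather"])
```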

5) cross_visualization(data, continuous_features, discrete_features, categorical_features, dt_features):
This function visualizes the relationships between the different independent features.

Input:
data= Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
categorical_features= List of feature names containing categorical values
dt_features= List of feature names containing datetime values

Output:
continuous_features2= List of feature names containing continuous numerical values, excluding the dependent feature
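
Sketch, with the same assumptions as above:

```python
from data_processing import cross_visualization  # assumed import path

continuous_features2 = cross_visualization(df, continuous_features, discrete_features,
                                           categorical_features, dt_features)
```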

6) dependent_independent_visualization(data, continuous_features, discrete_features, categorical_features, dt_features, dependent_feature):
This function visualizes the relationships between the dependent feature and the independent features.

Input:
data= Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
categorical_features= List of feature names containing categorical values
dt_features= List of feature names containing datetime values
dependent_feature= Dependent feature name in string format

Output:
None
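
Sketch; passing the continuous_features2 list returned by cross_visualization here is an assumption:

```python
from data_processing import dependent_independent_visualization  # assumed import path

dependent_independent_visualization(df, continuous_features2, discrete_features,
                                    categorical_features, dt_features,
                                    dependent_feature="count")  # placeholder target name
```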

7) outlier_removal(data, continuous_features, discrete_features, dependent_var, dependent_var_type, action):
This function visualizes the outliers using boxplots and removes them.

Input:
data=Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
dependent_var= Dependent feature name in string format
dependent_var_type= String indicating whether the problem is a regression problem (use 'Regression') or not
action= Pass 'remove' to delete the rows associated with the outliers

Output:
df=Dataset
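
Sketch for a regression target, continuing the assumptions above:

```python
from data_processing import outlier_removal  # assumed import path

df = outlier_removal(df, continuous_features2, discrete_features,
                     dependent_var="count", dependent_var_type="Regression", action="remove")
```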

8) transformation_visualization(data, continuous_features, discrete_features, dependent_feature):
This function visualizes each feature after applying various transformation techniques.

Input:
data=Dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
dependent_feature= Dependent feature name in string format

Output:
None
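
Sketch:

```python
from data_processing import transformation_visualization  # assumed import path

transformation_visualization(df, continuous_features2, discrete_features, dependent_feature="count")
```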

9) feature_transformation(train_data, continuous_features, discrete_features, transformation, dependent_feature):
This function performs the feature transformation technique specified by the user.

Input:
train_data=Training dataset
continuous_features= List of feature names containing continuous numerical values
discrete_features= List of feature names containing discrete numerical values
transformation= Type of transformation: 'none' = no transformation, 'log' = log transformation, 'sqrt' = square-root transformation, 'reciprocal' = reciprocal transformation, 'exp' = exponential transformation, 'boxcox' = Box-Cox transformation
dependent_feature= Dependent feature name in string format

Output:
X_data=Training dataset
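
Sketch, assuming the data has already been split into train and test sets; the 'log' choice is only an example:

```python
from sklearn.model_selection import train_test_split

from data_processing import feature_transformation  # assumed import path

train_data, test_data = train_test_split(df, test_size=0.2, random_state=42)
X_train = feature_transformation(train_data, continuous_features2, discrete_features,
                                 transformation="log", dependent_feature="count")
```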

10) categorical_transformation(train_data, categorical_encoding):
This function transforms the categorical features into numerical ones using encoding techniques.

Input:
train_data=Training dataset
categorical_encoding= Dictionary mapping each encoding technique to the features it should be applied to, e.g. {'one_hot_encoding':[],'frequency_encoding':[],'mean_encoding':[],'target_guided_ordinal_encoding':{}}

Output:
X_data=Training dataset
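
Sketch; the column names inside the dictionary are placeholders:

```python
from data_processing import categorical_transformation  # assumed import path

categorical_encoding = {
    "one_hot_encoding": ["season"],            # placeholder columns
    "frequency_encoding": [],
    "mean_encoding": [],
    "target_guided_ordinal_encoding": {},
}
X_train = categorical_transformation(X_train, categorical_encoding)
```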

11a) feature_selection(Xtrain, ytrain, threshold, data_type, filter_type):
This function performs feature selection based on the dependent and independent features in the training dataset.

Input:
Xtrain=Training dataset
ytrain= Dependent data in the training dataset
threshold= Threshold for the correlation
data_type= Whether the data is linearly or non-linearly dependent on the output label
filter_type= Statistical test to use, selected according to the input/output data types; for example, if the input and output are both numerical, pass 'in_num_out_num' as shown in the dictionary below:
{'in_num_out_num':{'linear':['pearson'],'non-linear':['spearman']},
'in_num_out_cat':{'linear':['ANOVA'],'non-linear':['kendall']},
'in_cat_out_num':{'linear':['ANOVA'],'non-linear':['kendall']},
'in_cat_out_cat':{'chi_square_test':True,'mutual_info':True}}

Output:
Xtrain= Training dataset
feature_df= DataFrame containing the features with their p-values
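
Sketch, assuming the dependent column is still present in the transformed training dataframe and that 'linear'/'in_num_out_num' are valid values for this data:

```python
from data_processing import feature_selection  # assumed import path

ytrain = X_train["count"]                      # placeholder dependent column
Xtrain = X_train.drop(columns=["count"])
Xtrain, feature_df = feature_selection(Xtrain, ytrain, threshold=0.05,  # illustrative threshold
                                       data_type="linear", filter_type="in_num_out_num")
```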

11b) feature_selection(Xtrain, ytrain, Xtest, ytest, threshold, data_type, filter_type):
This function performs feature selection based on the dependent and independent features in the training dataset and applies the same selection to the test dataset.

Input:
Xtrain=Training dataset
ytrain= Dependent data in the training dataset
Xtest= Test dataset
ytest= Dependent data in the test dataset
threshold= Threshold for the correlation
data_type= Whether the data is linearly or non-linearly dependent on the output label
filter_type= Statistical test to use, selected according to the input/output data types; for example, if the input and output are both numerical, pass 'in_num_out_num' as shown in the dictionary below:
{'in_num_out_num':{'linear':['pearson'],'non-linear':['spearman']},
'in_num_out_cat':{'linear':['ANOVA'],'non-linear':['kendall']},
'in_cat_out_num':{'linear':['ANOVA'],'non-linear':['kendall']},
'in_cat_out_cat':{'chi_square_test':True,'mutual_info':True}}

Output:
Xtrain= Training dataset
Xtest= Test dataset
feature_df= DataFrame containing the features with their p-values
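
The same call with a test split, assuming Xtest/ytest were prepared by applying identical preprocessing to the test data:

```python
Xtrain, Xtest, feature_df = feature_selection(Xtrain, ytrain, Xtest, ytest,
                                              threshold=0.05, data_type="linear",
                                              filter_type="in_num_out_num")
```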

12) convert_dtype(data, categorical_features):
This function converts categorical features that contain numeric values but are stored as categorical into int format.

Input:
data= Dataset
categorical_features= List of feature names containing categorical values

Output:
df=Dataset
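
Sketch:

```python
from data_processing import convert_dtype  # assumed import path

df = convert_dtype(df, categorical_features)
```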

Note:
Use the same parameters for both the train and test datasets for better accuracy.


We have implemented a bike-sharing project to demonstrate how the functions can be used for both classification and regression problem statements.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiasts | Machine Learning | Deep Learning | Advanced Computer Vision.