To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. This project aims to identify the state of a given wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data fall into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which indicate whether the wafer is good or needs to be replaced. Additionally, we require a schema file, which contains all the relevant information about the training files, such as the file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the column names, and their datatypes.
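For illustration, a schema file of this kind might look like the following; the key names and values here are hypothetical, not the project's actual schema:

```python
import json

# Hypothetical training schema mirroring the fields described above; the
# real file's key names, column count, and datatypes may differ.
schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 592,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        # ... one entry per sensor column ...
        "Good/Bad": "Integer",
    },
}

with open("schema_training.json", "w") as f:
    json.dump(schema, f, indent=4)
```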

Directory creation:

All the necessary folders are created to cleanly separate the files so that the end user can access them easily.
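A minimal sketch of this step, assuming a folder layout along these lines (the actual directory names may differ):

```python
import os

# Hypothetical folder layout; the project's real directory names may differ.
FOLDERS = [
    "Training_Raw_Files/Good_Raw",
    "Training_Raw_Files/Bad_Raw",
    "Prediction_Raw_Files/Good_Raw",
    "Prediction_Raw_Files/Bad_Raw",
    "models",
    "Prediction_Output",
]

for folder in FOLDERS:
    os.makedirs(folder, exist_ok=True)  # no error if the folder already exists
```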

Data Validation:

In this step, we match each dataset file against the provided schema file: the file name, the number of columns it should contain, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or predict with our model; if not, it is considered bad and moved to the bad folder. Moreover, we also identify columns with null values. If an entire column's data is missing, we also consider the file bad; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
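These checks could be sketched as follows; the file-name pattern, column count, and folder names are illustrative assumptions, standing in for values that would really come from the schema file:

```python
import os
import re
import shutil
import pandas as pd

# Illustrative values only; in practice the regex, column count, and
# folder names are derived from the schema file.
FILENAME_PATTERN = re.compile(r"^wafer_\d{8}_\d{6}\.csv$")
EXPECTED_COLUMNS = 592
GOOD_DIR = "Training_Raw_Files/Good_Raw"
BAD_DIR = "Training_Raw_Files/Bad_Raw"

def validate_file(path):
    """Move a raw file to the good or bad folder based on schema checks."""
    name = os.path.basename(path)
    df = pd.read_csv(path)
    bad = (
        not FILENAME_PATTERN.match(name)        # wrong file-name format
        or df.shape[1] != EXPECTED_COLUMNS      # wrong number of columns
        or df.isnull().all(axis=0).any()        # a column with no data at all
    )
    shutil.move(path, os.path.join(BAD_DIR if bad else GOOD_DIR, name))
```

Partially missing columns are kept: pandas already reads the gaps as NaN, which the imputer handles later in pre-processing.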

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" is created in the database, depending on whether we are training or predicting, for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created, and new files are inserted into the existing table, because we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
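A minimal sketch of this step using SQLite; the project's actual database engine and file paths may differ, and the table name is taken from the description above:

```python
import glob
import sqlite3
import pandas as pd

conn = sqlite3.connect("training.db")

# Append every validated file to one table so that training always sees
# old files as well as new ones.
for path in glob.glob("Training_Raw_Files/Good_Raw/*.csv"):
    pd.read_csv(path).to_sql("train_good_raw_dt", conn,
                             if_exists="append", index=False)

# Export the accumulated table as a single CSV for model training.
pd.read_sql("SELECT * FROM train_good_raw_dt", conn) \
    .to_csv("InputFile.csv", index=False)
conn.close()
```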

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If present, the NaN values are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they provide no information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a different model on each cluster of the data. The KMeans model is trained on the pre-processed data and saved for later use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression", and "XGBoost". For each cluster, every algorithm is tuned with the best parameters derived from GridSearch. We calculate the AUC score for each model and select the one with the best score; in this way, the best model is chosen for every cluster. All the per-cluster models are saved for use in prediction. In the end, the confusion matrix of the model associated with each cluster is also saved to give a glance at the models' performance.
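The whole pipeline can be condensed into a sketch like the one below. The parameter grids, file paths, and column names ("Wafer", "Good/Bad") are illustrative assumptions, not the project's tuned values:

```python
import joblib
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

data = pd.read_csv("InputFile.csv")
X = data.drop(columns=["Wafer", "Good/Bad"])  # assumed column names
y = data["Good/Bad"].replace(-1, 0)           # XGBoost expects {0, 1} labels

# 1. Impute NaNs with KNN and drop zero-variance columns.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
X = X.loc[:, X.std() > 0]

# 2. Pick the cluster count from the elbow of the KMeans inertia curve.
inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_
            for k in range(1, 11)]
n_clusters = KneeLocator(range(1, 11), inertias,
                         curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(X)
joblib.dump(kmeans, "models/kmeans.pkl")

# 3. For each cluster, grid-search the four algorithms and keep the one
#    with the best cross-validated AUC.
candidates = [
    (RandomForestClassifier(), {"n_estimators": [50, 100]}),
    (KNeighborsClassifier(), {"n_neighbors": [3, 5]}),
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    (XGBClassifier(), {"max_depth": [3, 5]}),
]
for c in range(n_clusters):
    mask = kmeans.labels_ == c
    best_score, best_model = -1.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=3)
        search.fit(X[mask], y[mask])
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_
    joblib.dump(best_model, f"models/cluster_{c}.pkl")
```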

Prediction:

For prediction, the essential directories are created first. The data validation, data insertion, and data pre-processing steps are the same as in the training section. The KMeans model created during training is loaded, and the cluster for each pre-processed prediction record is predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions are made for all the clusters, they are saved, along with the wafer names, in a CSV file at a given location.
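A minimal sketch of the prediction flow, reusing the names and paths assumed in the training sketch:

```python
import joblib
import pandas as pd

# Assumes the prediction data has already gone through the same validation,
# imputation, and column-dropping steps as the training data.
data = pd.read_csv("PredictionInputFile.csv")
wafer_names = data["Wafer"]
X = data.drop(columns=["Wafer"])

kmeans = joblib.load("models/kmeans.pkl")
clusters = kmeans.predict(X)

# Route each row to the model trained for its cluster.
results = pd.DataFrame({"Wafer": wafer_names, "Prediction": 0})
for c in set(clusters):
    mask = clusters == c
    model = joblib.load(f"models/cluster_{c}.pkl")
    results.loc[mask, "Prediction"] = model.predict(X[mask])

# Map the {0, 1} model output back to the original {-1, +1} convention.
results["Prediction"] = results["Prediction"].replace({0: -1, 1: 1})
results.to_csv("Prediction_Output/Predictions.csv", index=False)
```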

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision.