To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a given wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To that end, a training dataset is provided for building a machine learning classification model that predicts wafer quality.

Data Description:

The columns of the provided data can be divided into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the column names, and their datatypes.
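For reference, a schema file of this kind might be structured as sketched below. The key names and values are illustrative placeholders, not the project's actual schema; the snippet only shows one plausible way to write such a file.

```python
import json

# Hypothetical schema_training.json: keys and values below are illustrative
# placeholders, not taken from the project's real schema file.
schema = {
    "SampleFileName": "wafer_ddmmyyyy_hhmmss.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 4,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        "Output": "Integer",
    },
}

with open("schema_training.json", "w") as f:
    json.dump(schema, f, indent=4)
```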

Directory creation:

All the necessary folders are created to cleanly separate the files so that the end user can access them easily.

Data Validation:

In this step, we validate our dataset against the provided schema file, checking the file names, the number of columns, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or predict; otherwise it is considered bad and moved to the bad folder. We also identify columns with null values: if an entire column is missing, the file is likewise considered bad; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
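A minimal sketch of this validation step, assuming a file-name pattern like wafer_&lt;date&gt;_&lt;time&gt;.csv, the schema keys from the example above, and illustrative directory names (the project's own helpers may differ):

```python
import os
import re
import shutil
import pandas as pd

def validate_file(path, schema, good_dir="good_raw", bad_dir="bad_raw"):
    """Move a raw CSV to good_raw/ or bad_raw/ based on hypothetical schema rules."""
    # File-name check: wafer_<date digits>_<time digits>.csv
    name_ok = re.fullmatch(
        r"wafer_\d{%d}_\d{%d}\.csv"
        % (schema["LengthOfDateStampInFile"], schema["LengthOfTimeStampInFile"]),
        os.path.basename(path),
    )
    df = pd.read_csv(path)
    cols_ok = len(df.columns) == schema["NumberofColumns"]
    # A column that is entirely empty makes the whole file "bad";
    # partially missing columns are allowed and imputed later.
    no_empty_cols = not df.isna().all(axis=0).any()

    target = good_dir if (name_ok and cols_ok and no_empty_cols) else bad_dir
    os.makedirs(target, exist_ok=True)
    shutil.move(path, os.path.join(target, os.path.basename(path)))
```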

Data Insertion in Database:

First, we create a database with the given name. If the database already exists, we open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" is created in the database, depending on whether we are training or predicting, for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created; instead, the new files are inserted into the existing table, since training should be done on both new and old training files. Finally, the data stored in the database is exported as a CSV file to be used for model training.
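A minimal sketch of the insertion-and-export step, using SQLite through pandas purely for illustration; the database engine, file names, and paths are assumptions, not the project's actual implementation:

```python
import glob
import sqlite3
import pandas as pd

def insert_good_files_and_export(db_path="training.db",
                                 table="train_good_raw_dt",
                                 good_dir="good_raw",
                                 export_csv="InputFile.csv"):
    """Append every validated CSV to one table, then export it for training."""
    conn = sqlite3.connect(db_path)  # creates the database if it does not exist
    for path in glob.glob(f"{good_dir}/*.csv"):
        df = pd.read_csv(path)
        # Append so old and new training files accumulate in the same table.
        df.to_sql(table, conn, if_exists="append", index=False)
    pd.read_sql(f"SELECT * FROM {table}", conn).to_csv(export_csv, index=False)
    conn.close()
```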

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If present, the NaN values are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they provide no information during model training. A prediction schema is then created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate model on each cluster of the data. The KMeans model is trained on the pre-processed data and saved for later use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is trained with the best parameters derived from GridSearch. We calculate the AUC score for each model and select the one with the best score; in this way, the best model is selected for every cluster. All the cluster models are saved for use in prediction. Finally, the confusion matrix of the model associated with each cluster is also saved to give a glance at the models' performance.
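A condensed sketch of this pipeline, using scikit-learn and the kneed package for KneeLocator; for brevity only two of the four candidate algorithms are shown, and all parameter grids, hyperparameters, and helper names are illustrative assumptions rather than the project's actual settings:

```python
import numpy as np
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

def preprocess(df):
    """KNN-impute missing values, then drop zero-variance columns."""
    X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(df), columns=df.columns)
    return X.loc[:, X.std() > 0]

def choose_k(X, max_k=10):
    """Pick the elbow of the KMeans inertia curve with KneeLocator."""
    inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_
                for k in range(1, max_k + 1)]
    return KneeLocator(range(1, max_k + 1), inertias,
                       curve="convex", direction="decreasing").knee

def best_model_per_cluster(X, y, kmeans):
    """Grid-search a few classifiers per cluster and keep the best AUC."""
    candidates = {
        "random_forest": (RandomForestClassifier(), {"n_estimators": [50, 100]}),
        "logistic_regression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    }
    winners = {}
    for c in np.unique(kmeans.labels_):
        Xc, yc = X[kmeans.labels_ == c], y[kmeans.labels_ == c]
        Xtr, Xte, ytr, yte = train_test_split(Xc, yc, test_size=0.3, random_state=42)
        best_auc, best = -1.0, None
        for estimator, grid in candidates.values():
            gs = GridSearchCV(estimator, grid, cv=3).fit(Xtr, ytr)
            auc = roc_auc_score(yte, gs.predict_proba(Xte)[:, 1])
            if auc > best_auc:
                best_auc, best = auc, gs.best_estimator_
        winners[c] = best  # best model for this cluster
    return winners
```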

Prediction:

In the prediction stage, the essential directories are created first. The data validation, data insertion, and data pre-processing steps are the same as in the training section. The KMeans model created during training is loaded, and clusters for the pre-processed prediction data are predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions are made for all clusters, the predictions along with the wafer names are saved in a CSV file at a given location.
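A minimal sketch of the per-cluster prediction loop, assuming the KMeans model and the cluster models were saved with joblib; the paths, file names, and column name are hypothetical:

```python
import joblib
import pandas as pd

def predict_wafers(pred_csv="InputFile.csv", out_csv="Predictions.csv"):
    """Route each pre-processed row to its cluster's saved model."""
    df = pd.read_csv(pred_csv)
    wafer_names = df.pop("Wafer")              # keep wafer names for the output file
    kmeans = joblib.load("models/kmeans.pkl")  # saved during training (assumed path)
    clusters = kmeans.predict(df)

    results = []
    for c in sorted(set(clusters)):
        model = joblib.load(f"models/cluster_{c}.pkl")  # assumed naming scheme
        mask = clusters == c
        preds = model.predict(df[mask])
        results.append(pd.DataFrame({"Wafer": wafer_names[mask], "Prediction": preds}))

    pd.concat(results).to_csv(out_csv, index=False)
```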

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision