To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a provided wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data can be divided into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, that identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as the file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
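The exact contents of the schema file depend on the dataset; a hypothetical snippet, written from Python, might look like the following (all key names and values here are illustrative, not taken from the project):

```python
import json

# Hypothetical training schema; every key name and value is illustrative.
schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,   # e.g. 08012020
    "LengthOfTimeStampInFile": 6,   # e.g. 120000
    "NumberofColumns": 4,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        "Output": "Integer",
    },
}

with open("schema_training.json", "w") as f:
    json.dump(schema, f, indent=2)
```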

Directory creation:

All the necessary folders are created to keep the different kinds of files separate, so that the end user can access them easily.
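A minimal sketch of this step (the folder names below are assumptions, not the project's actual layout):

```python
import os

# Illustrative directory layout; the real folder names may differ.
folders = [
    "Training_Raw_Files/Good_Raw",
    "Training_Raw_Files/Bad_Raw",
    "Prediction_Raw_Files/Good_Raw",
    "Prediction_Raw_Files/Bad_Raw",
    "models",
    "Prediction_Output",
]

for folder in folders:
    os.makedirs(folder, exist_ok=True)  # no error if the folder already exists
```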

Data Validation:

In this step, we match our dataset against the provided schema file to validate the file names, the number of columns each file should contain, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or predict with our model; if not, the file is considered bad and moved to the bad folder. Moreover, we also identify columns with null values. If the data of a whole column is missing, we also consider the file bad; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat it as good data.
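A simplified sketch of this validation logic, reusing the hypothetical schema keys from the earlier snippet (file-name pattern and folder names are assumptions):

```python
import os
import re
import shutil

import pandas as pd

def validate_file(path, schema, good_dir="Good_Raw", bad_dir="Bad_Raw"):
    """Move a raw file to the good or bad folder based on the schema rules."""
    # File name must match wafer_<date>_<time>.csv with the schema's lengths.
    pattern = re.compile(
        r"wafer_\d{%d}_\d{%d}\.csv"
        % (schema["LengthOfDateStampInFile"], schema["LengthOfTimeStampInFile"])
    )
    df = pd.read_csv(path)

    name_ok = bool(pattern.fullmatch(os.path.basename(path)))
    cols_ok = len(df.columns) == schema["NumberofColumns"]
    # A column whose data is entirely missing also disqualifies the file;
    # partially missing values are read in as NaN and kept.
    no_empty_cols = not df.isna().all().any()

    target = good_dir if (name_ok and cols_ok and no_empty_cols) else bad_dir
    shutil.move(path, os.path.join(target, os.path.basename(path)))
```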

Data Insertion in Database:

First, we create a database with the given name. If the database already exists, we open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" is created in the database, depending on whether we are training or predicting, for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created, and the new files are inserted into the existing table, as we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
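A sketch of this step using SQLite (the actual project may use a different database engine; file paths are illustrative):

```python
import glob
import sqlite3

import pandas as pd

# Open the database, creating the file if it does not exist yet.
conn = sqlite3.connect("training.db")

# Append every validated file to the same table, so that new and old
# training files accumulate together.
for path in glob.glob("Training_Raw_Files/Good_Raw/*.csv"):
    pd.read_csv(path).to_sql("train_good_raw_dt", conn, if_exists="append", index=False)

# Export the accumulated rows as a single CSV for model training.
pd.read_sql("SELECT * FROM train_good_raw_dt", conn).to_csv("InputFile.csv", index=False)
conn.close()
```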

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If present, the NaN values are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they don't give any information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected by plotting the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate model for the data in each cluster. The KMeans model is trained over the pre-processed data, and the model is saved for further use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, all four algorithms are passed the best parameters derived from GridSearch. We calculate the AUC score for each model and select the model with the best score; in this way, the best model is chosen for every cluster. All the models, one per cluster, are saved for use in prediction. In the end, the confusion matrix of the model associated with every cluster is also saved to give a glance at the performance of the models.
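These steps map onto standard scikit-learn, xgboost, and kneed APIs. The following is a condensed sketch of the pipeline, with assumed column names ("Wafer", "Output"), deliberately small illustrative parameter grids, and simplified model saving; it is not the project's exact code:

```python
import joblib
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

df = pd.read_csv("InputFile.csv")
X = df.drop(columns=["Wafer", "Output"])  # column names are assumptions
y = df["Output"].map({-1: 0, 1: 1})       # map -1/+1 labels to 0/1 for XGBoost

# Impute missing values with KNN, then drop zero-variance columns.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
X = X.loc[:, X.std() != 0]

# Choose the number of clusters from the elbow curve with KneeLocator.
inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in range(1, 11)]
k_opt = KneeLocator(range(1, 11), inertias, curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=k_opt, random_state=42).fit(X)
joblib.dump(kmeans, "models/kmeans.pkl")

# The four candidate algorithms with small, illustrative parameter grids.
candidates = {
    "rf": (RandomForestClassifier(), {"n_estimators": [100, 200]}),
    "knn": (KNeighborsClassifier(), {"n_neighbors": [3, 5]}),
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    "xgb": (XGBClassifier(eval_metric="logloss"), {"max_depth": [3, 5]}),
}

# For each cluster, grid-search every candidate and keep the best by AUC.
for c in range(k_opt):
    mask = kmeans.labels_ == c
    X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3)
    best_auc, best_model = -1.0, None
    for model, grid in candidates.values():
        fitted = GridSearchCV(model, grid, cv=3).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, fitted.predict_proba(X_te)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, fitted
    joblib.dump(best_model, f"models/cluster_{c}.pkl")
```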

Prediction:

In data prediction, the essential directories are created first. The data validation, data insertion, and data processing steps are similar to the training section. The KMeans model created during training is loaded, and the cluster for each pre-processed prediction record is predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions are made for all the clusters, they are saved along with the wafer names in a CSV file at a given location.
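A sketch of the prediction flow, assuming the prediction data was pre-processed like the training data and the models were saved under the names used in the training sketch (all file paths are hypothetical):

```python
import joblib
import numpy as np
import pandas as pd

X_pred = pd.read_csv("preprocessed_prediction_data.csv")  # hypothetical file
wafer_names = X_pred.pop("Wafer")                         # keep names for the output

# Assign each record to a cluster with the saved KMeans model.
kmeans = joblib.load("models/kmeans.pkl")
clusters = kmeans.predict(X_pred)

# Predict each cluster's records with that cluster's saved model.
preds = np.empty(len(X_pred), dtype=int)
for c in np.unique(clusters):
    model = joblib.load(f"models/cluster_{c}.pkl")
    preds[clusters == c] = model.predict(X_pred[clusters == c])

# Map the 0/1 model output back to the -1/+1 wafer labels and save.
out = pd.DataFrame({"Wafer": wafer_names, "Prediction": np.where(preds == 1, 1, -1)})
out.to_csv("Prediction_Output/Predictions.csv", index=False)
```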

Deployment:

We will be deploying the model to Heroku Cloud.
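A typical Heroku deployment also needs a Procfile declaring the web process, alongside a requirements.txt. A minimal sketch, assuming (hypothetically) a Flask app exposed as app in main.py and served with gunicorn:

```
web: gunicorn main:app
```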

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision.