Retrainable-Faulty-Wafer-Detector

Identifies faulty wafers before they are used for the fabrication of integrated circuits or, in photovoltaics, for the manufacture of solar cells.

Overview

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a provided wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (faulty, the substrate needs to be replaced). The model is then retrained on this data so that it continuously updates itself with its environment and generalizes better over time. To this end, training and prediction datasets are provided for building a machine learning classification model that predicts wafer quality.

Data Description:

The columns of the provided data fall into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the column names, and their datatypes.
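
For illustration, a schema file of this kind might look like the following; the field names and values here are representative placeholders, not the project's exact schema:

```json
{
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 592,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        "Good/Bad": "Integer"
    }
}
```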

Directory creation:

All the necessary folders are created to keep the different kinds of files cleanly separated, so that the end user can access them easily.
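
A minimal sketch of this step is shown below; the folder names are illustrative, not necessarily the exact ones used in the project:

```python
import os

# Illustrative directory layout for separating validated files, outputs and models.
FOLDERS = [
    "Training_Raw_Files_Validated/Good_Raw",
    "Training_Raw_Files_Validated/Bad_Raw",
    "Prediction_Output_File",
    "Models",
]

for folder in FOLDERS:
    os.makedirs(folder, exist_ok=True)  # create if missing, ignore if present
```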

Data Validation:

In this step, we check each dataset file against the provided schema file: the file name, the number of columns it should contain, the column names, and their datatypes. Files that match the schema are considered good files, on which we can train the model or make predictions; files that do not match are moved to the bad folder. Moreover, we also identify columns with null values. If all the data in a column is missing, the file is likewise moved to the bad folder. If, on the contrary, only a fraction of a column's data is missing, we fill the gaps with NaN and treat the file as good data.
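
The following sketch shows one way such a validation pass could look; the file-name pattern, expected column count, and folder names are assumptions for illustration, not the project's exact values:

```python
import os
import re
import shutil
import pandas as pd

# Hypothetical file-name convention: wafer_<8-digit date>_<6-digit time>.csv
FILENAME_PATTERN = re.compile(r"^wafer_\d{8}_\d{6}\.csv$")
EXPECTED_COLUMNS = 592  # in practice this is read from the schema file

def validate_file(path, good_dir="Good_Raw", bad_dir="Bad_Raw"):
    name = os.path.basename(path)
    # Reject files whose name does not match the schema's naming pattern.
    if not FILENAME_PATTERN.match(name):
        shutil.move(path, os.path.join(bad_dir, name))
        return
    df = pd.read_csv(path)  # missing entries are read in as NaN automatically
    # Reject files with the wrong column count or any fully empty column.
    if df.shape[1] != EXPECTED_COLUMNS or df.isna().all(axis=0).any():
        shutil.move(path, os.path.join(bad_dir, name))
        return
    shutil.move(path, os.path.join(good_dir, name))
```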

Data Insertion in Database:

First, a connection to the database is opened if it exists; otherwise a new one is created. A table named train_good_raw_dt or pred_good_raw_dt, depending on whether we are training or predicting, is created in the database to hold the good data files obtained from the data validation step. If the table is already present, the new files are inserted into it, since we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file, to be used for model training.
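
A minimal sketch of this step, assuming a SQLite backend (the README does not name the database engine) and the table and file names described above:

```python
import sqlite3
from pathlib import Path
import pandas as pd

def insert_good_files(db_path="training.db", good_dir="Good_Raw",
                      table="train_good_raw_dt", export_csv="InputFile.csv"):
    conn = sqlite3.connect(db_path)  # opens the database, creating it if absent
    for csv_file in Path(good_dir).glob("*.csv"):
        df = pd.read_csv(csv_file)
        # Append so that old and new training files accumulate in one table.
        df.to_sql(table, conn, if_exists="append", index=False)
    # Export the full table back out as a single CSV for model training.
    pd.read_sql(f"SELECT * FROM {table}", conn).to_csv(export_csv, index=False)
    conn.close()
```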

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns; if present, they are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they give no information during model training. A prediction schema is then created based on the remaining dataset columns.

Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected by plotting the elbow plot, and for the dynamic selection of the number of clusters we use the KneeLocator function. The idea behind clustering is to train a different algorithm on each cluster of the data. The KMeans model is trained over the pre-processed data and saved for further use in prediction.

After the clusters are created, we find the best model for each cluster. We use four algorithms: Random Forest, K-Neighbours, Logistic Regression, and XGBoost. For each cluster, every algorithm is run with the best parameters derived from GridSearch. We calculate the AUC scores for all the models and select the one with the best score; in this way, the best model is selected for each cluster. The chosen models are saved so that they can be used in future predictions. In the end, the confusion matrix of the model associated with every cluster is also saved, to give a quick view of the models' performance.
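
The pipeline above can be condensed into a sketch like the one below, using scikit-learn and the kneed package. The parameter grids are placeholders, only two of the four candidate algorithms are shown for brevity, and the label column name Good/Bad is an assumption:

```python
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

def train(df: pd.DataFrame):
    # Impute NaNs with the KNN imputer, then drop zero-variance columns.
    X, y = df.drop(columns=["Good/Bad"]), df["Good/Bad"]
    X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
    X = X.loc[:, X.std() > 0]

    # Elbow method: fit KMeans over a range of k and locate the knee dynamically.
    ks = list(range(1, 11))
    inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in ks]
    k = KneeLocator(ks, inertias, curve="convex", direction="decreasing").knee or 3
    kmeans = KMeans(n_clusters=k, random_state=42).fit(X)  # persisted in practice

    # Per cluster: tune each candidate with GridSearch, keep the best AUC score.
    candidates = [
        (RandomForestClassifier(random_state=42), {"n_estimators": [100, 200]}),
        (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    ]
    models = {}
    for cluster in range(k):
        mask = kmeans.labels_ == cluster
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[mask], y[mask], test_size=0.3, random_state=42)
        best_auc, best_model = -1.0, None
        for est, grid in candidates:
            search = GridSearchCV(est, grid, scoring="roc_auc", cv=3).fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
            if auc > best_auc:
                best_auc, best_model = auc, search.best_estimator_
        models[cluster] = best_model  # saved per cluster in the real pipeline
    return kmeans, models
```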

Prediction:

For prediction, the essential directories are created first. The data validation, data insertion, and data pre-processing steps are the same as in the training section. The KMeans model created during training is loaded, and a cluster is predicted for each row of the pre-processed prediction data. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once the prediction is made for all the clusters, the predictions, along with the wafer names, are saved in a CSV file at a given location.
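
An illustrative version of this routing step, assuming the data is already pre-processed, the models were persisted with joblib, and the hypothetical paths shown below:

```python
import joblib
import numpy as np
import pandas as pd

def predict(df: pd.DataFrame, output_path="Predictions.csv"):
    wafer_names = df["Wafer"]
    X = df.drop(columns=["Wafer"])  # assumed already pre-processed

    # Assign each row to a cluster, then apply that cluster's model.
    kmeans = joblib.load("Models/kmeans.pkl")  # hypothetical path
    clusters = kmeans.predict(X)
    preds = np.zeros(len(df), dtype=int)
    for cluster in np.unique(clusters):
        model = joblib.load(f"Models/model_cluster_{cluster}.pkl")
        mask = clusters == cluster
        preds[mask] = model.predict(X[mask])

    # Save wafer names alongside the predictions.
    pd.DataFrame({"Wafer": wafer_names.values, "Prediction": preds}).to_csv(
        output_path, index=False)
```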

Retraining:

After the prediction, the prediction data is merged with the previous training dataset, and the models are retrained on the combined data using the hyperparameter values obtained from GridSearch. This cycle repeats with every prediction, so the system keeps learning from newly acquired data and becomes more robust over time.
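
A sketch of this step, reusing the train function from the training sketch above; the file names are placeholders:

```python
import pandas as pd

def retrain(train_csv="InputFile.csv", new_csv="NewLabelledData.csv"):
    # Fold the newly acquired data back into the historical training set.
    old = pd.read_csv(train_csv)
    new = pd.read_csv(new_csv)
    combined = pd.concat([old, new], ignore_index=True).drop_duplicates()
    combined.to_csv(train_csv, index=False)
    # Rerun the full training pipeline on the merged data.
    return train(combined)
```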

Deployment:

We will be deploying the model to Heroku Cloud.
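
Assuming the model is served through a WSGI app (for example Flask, which this README does not state explicitly), a typical Heroku deployment needs little more than a Procfile such as the following, where app:app is a hypothetical module:object path:

```
web: gunicorn app:app
```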

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision