To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a given wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data fall into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
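For illustration, the schema file could look roughly like the following. The keys and values here are assumptions based on the description above, abridged to a few columns; they are not copied from the project's actual schema:

```json
{
  "SampleFileName": "wafer_08012020_120000.csv",
  "LengthOfDateStampInFile": 8,
  "LengthOfTimeStampInFile": 6,
  "NumberofColumns": 4,
  "ColName": {
    "Wafer": "varchar",
    "Sensor-1": "float",
    "Sensor-2": "float",
    "Output": "Integer"
  }
}
```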

Directory creation:

All the necessary folders are created to keep the different kinds of files separated, so that the end user can access them easily.
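A minimal sketch of this step; the folder names below are assumptions, not the project's actual layout:

```python
import os

# Assumed folder layout; the actual directory names in the project may differ.
REQUIRED_DIRS = [
    "Training_Raw_Files_Validated/Good_Raw",
    "Training_Raw_Files_Validated/Bad_Raw",
    "Training_FileFromDB",
    "Prediction_Output_File",
    "Models",
]

for path in REQUIRED_DIRS:
    os.makedirs(path, exist_ok=True)  # create the folder only if it does not exist yet
```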

Data Validation:

In this step, we validate our dataset against the provided schema file: the file names, the number of columns, the column names, and their datatypes must all match. If a file matches the schema, it is considered a good file on which we can train or predict; otherwise it is considered bad and moved to the bad folder. We also check the columns for null values. If a whole column's data is missing, the file is likewise considered bad; on the other hand, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
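A rough sketch of the column and null checks, reusing the assumed schema keys and folder names from above (file-name validation against the date/time stamp lengths is omitted for brevity):

```python
import json
import shutil
import pandas as pd

def validate_file(csv_path, schema_path="schema_training.json"):
    """Move a raw file to Good_Raw or Bad_Raw based on the schema (sketch)."""
    with open(schema_path) as f:
        schema = json.load(f)

    df = pd.read_csv(csv_path)

    # The column count and column names must match the schema exactly.
    if df.shape[1] != schema["NumberofColumns"] or list(df.columns) != list(schema["ColName"]):
        shutil.move(csv_path, "Training_Raw_Files_Validated/Bad_Raw/")
        return

    # A file is bad if any column is entirely empty; partial gaps stay as NaN.
    if df.isnull().all().any():
        shutil.move(csv_path, "Training_Raw_Files_Validated/Bad_Raw/")
    else:
        shutil.move(csv_path, "Training_Raw_Files_Validated/Good_Raw/")
```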

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we simply open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" is created in the database, depending on whether we are training or predicting, for inserting the good data files obtained from the data validation step. If the table is already present, no new table is created and the new files are inserted into the existing table, as we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
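A sketch of this step using SQLite; the actual project may use a different database engine, and the file and folder names are assumptions:

```python
import sqlite3
from pathlib import Path
import pandas as pd

def insert_good_files_and_export(db_name="training.db"):
    """Insert validated files into the database, then export everything to one CSV (sketch)."""
    conn = sqlite3.connect(db_name)  # creates the database file if it does not exist

    for csv_file in Path("Training_Raw_Files_Validated/Good_Raw").glob("*.csv"):
        df = pd.read_csv(csv_file)
        # Append so that old and new training files accumulate in the same table.
        df.to_sql("train_good_raw_dt", conn, if_exists="append", index=False)

    # Export the combined table as a single CSV for model training.
    full = pd.read_sql("SELECT * FROM train_good_raw_dt", conn)
    full.to_csv("Training_FileFromDB/InputFile.csv", index=False)
    conn.close()
```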

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If present, the NaN values are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they give no information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, using the "KneeLocator" function to make that selection dynamic. The idea behind clustering is to train a separate model on the data in each cluster. The KMeans model is fitted on the pre-processed data and saved for further use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is tuned with the best parameters derived from GridSearch. We calculate the AUC score of each model and select the one with the best score; in this way, the best model is chosen for every cluster. All the per-cluster models are saved for use in prediction. In the end, the confusion matrix of the model associated with each cluster is also saved to give a glance at the models' performance.
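A condensed sketch of this pipeline, assuming the column names from the data description; the hyperparameter grids are illustrative placeholders, and the file names follow the earlier sketches rather than the project's actual code:

```python
import numpy as np
import pandas as pd
import joblib
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.impute import KNNImputer
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

def train(df: pd.DataFrame):
    X = df.drop(columns=["Wafer", "Output"])
    y = (df["Output"] == 1).astype(int)  # map {+1, -1} labels to {1, 0} for the classifiers

    # 1. Impute missing values with the KNN imputer.
    X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)

    # 2. Drop zero-variance columns: they carry no information.
    X = X.loc[:, X.std() != 0]

    # 3. Pick the number of clusters from the elbow of the KMeans inertia curve.
    ks = list(range(1, 11))
    inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in ks]
    k_opt = KneeLocator(ks, inertias, curve="convex", direction="decreasing").knee

    kmeans = KMeans(n_clusters=k_opt, random_state=42).fit(X)
    joblib.dump(kmeans, "Models/kmeans.pkl")

    # 4. For each cluster, grid-search every candidate model and keep the best AUC.
    candidates = {
        "RandomForest": (RandomForestClassifier(), {"n_estimators": [50, 100]}),
        "KNeighbours": (KNeighborsClassifier(), {"n_neighbors": [3, 5]}),
        "LogisticRegression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
        "XGBoost": (XGBClassifier(eval_metric="logloss"), {"max_depth": [3, 5]}),
    }
    for c in range(k_opt):
        mask = kmeans.labels_ == c
        best_auc, best_model = -np.inf, None
        for name, (estimator, grid) in candidates.items():
            search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=3)
            search.fit(X[mask], y[mask])
            if search.best_score_ > best_auc:
                best_auc, best_model = search.best_score_, search.best_estimator_
        joblib.dump(best_model, f"Models/cluster_{c}_model.pkl")
```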

Prediction:

In data prediction, the essential directories are created first. The data validation, data insertion and data processing steps are the same as in the training section. The KMeans model created during training is loaded, and the clusters for the pre-processed prediction data are predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions are made for all the clusters, they are saved, along with the wafer names, in a CSV file at a given location.
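A sketch of the prediction flow, reusing the assumed file names from the training sketch:

```python
import pandas as pd
import joblib

def predict(pred_df: pd.DataFrame, out_path="Prediction_Output_File/Predictions.csv"):
    """Route each pre-processed row to its cluster's model and save the results (sketch)."""
    wafer_names = pred_df["Wafer"]
    X = pred_df.drop(columns=["Wafer"])

    # Assign every row to a cluster with the KMeans model saved during training.
    kmeans = joblib.load("Models/kmeans.pkl")
    clusters = kmeans.predict(X)

    # Predict each cluster's rows with that cluster's best model.
    # Predictions come out in {0, 1}, matching the training sketch's label mapping.
    results = pd.Series(index=X.index, dtype="object")
    for c in sorted(set(clusters)):
        model = joblib.load(f"Models/cluster_{c}_model.pkl")
        mask = clusters == c
        results[mask] = model.predict(X[mask])

    pd.DataFrame({"Wafer": wafer_names, "Prediction": results}).to_csv(out_path, index=False)
```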

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision