A machine learning project to effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a given wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (faulty, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data fall into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which indicate whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as the file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
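
For illustration, such a schema file might look like the sketch below. The key names, the sample file name, and the column count are assumptions made for the example, not the project's exact schema:

```python
import json

# Hypothetical illustration of a training schema file (schema_training.json);
# the actual keys and values depend on the dataset provided.
schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 592,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Output": "Integer",
    },
}

with open("schema_training.json", "w") as f:
    json.dump(schema, f, indent=4)
```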

Directory creation:

All the necessary folders are created to keep the different kinds of files separated, so that the end user can access them easily.
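
A minimal sketch of this setup is shown below; the folder names are illustrative assumptions, not the project's exact layout:

```python
import os

# Create the working directories up front; exist_ok avoids errors on reruns.
for folder in [
    "Training_Raw_Files/Good_Raw",
    "Training_Raw_Files/Bad_Raw",
    "Training_FileFromDB",
    "Prediction_FileFromDB",
    "Prediction_Output",
    "models",
]:
    os.makedirs(folder, exist_ok=True)
```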

Data Validation:

In this step, we match our dataset against the provided schema file: the file names, the number of columns each file should contain, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or predict; otherwise it is considered bad and moved to the bad folder. Moreover, we also check the columns for null values. If an entire column's data is missing, the file is also considered bad; on the other hand, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
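
A simplified sketch of this validation logic follows, reusing the folder layout assumed above; the file-name regex and the column count stand in for the values that would actually be read from the schema file:

```python
import os
import re
import shutil
import pandas as pd

# Assumed pattern: name, 8-digit date stamp, 6-digit time stamp.
FILENAME_PATTERN = re.compile(r"^wafer_\d{8}_\d{6}\.csv$")

def validate_file(path, expected_columns):
    """Move a raw file to the good or bad folder based on the schema checks."""
    name_ok = bool(FILENAME_PATTERN.match(os.path.basename(path)))
    df = pd.read_csv(path)
    columns_ok = df.shape[1] == expected_columns
    # A column with every value missing makes the whole file bad; partially
    # missing columns are kept (pandas already reads the gaps as NaN).
    no_all_null_column = not df.isna().all(axis=0).any()
    if name_ok and columns_ok and no_all_null_column:
        shutil.move(path, "Training_Raw_Files/Good_Raw/")
    else:
        shutil.move(path, "Training_Raw_Files/Bad_Raw/")
```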

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we simply open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" (for training or prediction, respectively) is created in the database for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created; instead, the new files are inserted into the existing table, since we want training to be done on the old as well as the new training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
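
A minimal sketch of this step using SQLite is given below; the database engine, table name, and paths are assumptions carried over from the earlier sketches:

```python
import sqlite3
from pathlib import Path
import pandas as pd

conn = sqlite3.connect("training.db")  # creates the database if it is absent

for csv_file in Path("Training_Raw_Files/Good_Raw").glob("*.csv"):
    df = pd.read_csv(csv_file)
    # Append, so that old and new training files accumulate in one table.
    df.to_sql("train_good_raw_dt", conn, if_exists="append", index=False)

# Export the combined table as a single CSV file for model training.
pd.read_sql("SELECT * FROM train_good_raw_dt", conn).to_csv(
    "Training_FileFromDB/InputFile.csv", index=False
)
conn.close()
```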

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If any are present, they are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they don't provide any information during model training. A prediction schema is then created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected by plotting the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a different algorithm on the data in each cluster. The KMeans model is trained over the pre-processed data, and the model is saved for further use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is trained with the best parameters derived from GridSearch. We calculate the AUC score for each model and select the one with the best score; in this way, the best model is chosen for each cluster. All the models, one per cluster, are saved for use in prediction. In the end, the confusion matrix of the model associated with each cluster is also saved to give a glance at the performance of the models.
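
A condensed sketch of this pipeline is shown below, assuming the file and folder names from the earlier sketches; the candidate list and parameter grid are truncated for brevity (the real pipeline tunes all four algorithms):

```python
import joblib
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Column names ("Wafer", "Output") are assumptions for this sketch.
data = pd.read_csv("Training_FileFromDB/InputFile.csv")
X, y = data.drop(columns=["Wafer", "Output"]), data["Output"]

# Impute NaN values with the KNN imputer.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)

# Drop zero-standard-deviation columns: they carry no information.
X = X.loc[:, X.std() > 0]

# Choose the number of clusters from the elbow of the KMeans inertia curve.
ks = range(1, 11)
inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in ks]
n_clusters = KneeLocator(ks, inertias, curve="convex",
                         direction="decreasing").knee

kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(X)
joblib.dump(kmeans, "models/kmeans.pkl")

candidates = [
    (RandomForestClassifier(), {"n_estimators": [50, 100]}),
    # K Neighbours, Logistic Regression and XGBoost would be listed here too.
]

for cluster in range(n_clusters):
    mask = kmeans.labels_ == cluster
    X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3)
    best_auc, best_model = -1.0, None
    for model, grid in candidates:
        search = GridSearchCV(model, grid, cv=3).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, search.best_estimator_
    joblib.dump(best_model, f"models/cluster_{cluster}.pkl")
```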

Prediction:

In the prediction phase, the essential directories are first created. The data validation, data insertion, and data pre-processing steps are similar to those in the training section. The KMeans model created during training is loaded, and the clusters for the pre-processed prediction data are predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions have been made for all the clusters, they are saved, along with the wafer names, in a CSV file at a given location.
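
The sketch below mirrors the assumptions of the training sketch above (paths, column names, saved model files) and assumes the prediction data has already gone through the same imputation and column-dropping steps:

```python
import joblib
import pandas as pd

data = pd.read_csv("Prediction_FileFromDB/InputFile.csv")
wafer_names, X = data["Wafer"], data.drop(columns=["Wafer"])

# Assign each pre-processed row to a cluster with the saved KMeans model.
kmeans = joblib.load("models/kmeans.pkl")
clusters = kmeans.predict(X)

results = []
for cluster in sorted(set(clusters)):
    mask = clusters == cluster
    # Load the best model found for this cluster during training.
    model = joblib.load(f"models/cluster_{cluster}.pkl")
    preds = model.predict(X[mask])
    results.append(pd.DataFrame({"Wafer": wafer_names[mask].values,
                                 "Prediction": preds}))

pd.concat(results).to_csv("Prediction_Output/Predictions.csv", index=False)
```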

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision.