To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a provided wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict the wafer quality.

Data Description:

The columns of the provided data can be classified into three parts: wafer name, sensor values and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
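As a rough illustration, the schema might carry fields like the following (the key names and values below are assumptions based on the description above, not the project's actual schema):

```python
# Hypothetical schema contents -- key names and values are illustrative only.
schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,   # digits in the date part of the file name
    "LengthOfTimeStampInFile": 6,   # digits in the time part of the file name
    "NumberofColumns": 592,
    "ColName": {"Wafer": "varchar", "Sensor-1": "float", "Output": "Integer"},
}
```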

Directory creation:

All the necessary folders are created to effectively separate the files so that the end-user can access them easily.
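A minimal sketch of this step; the folder names below are illustrative, not the project's exact layout:

```python
import os

# Create every required folder; exist_ok avoids errors on reruns.
for folder in (
    "Training_Raw_files_validated/Good_Raw",
    "Training_Raw_files_validated/Bad_Raw",
    "models",
    "Prediction_Output_File",
):
    os.makedirs(folder, exist_ok=True)
```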

Data Validation:

In this step, we validate our dataset against the provided schema file, checking the file names, the number of columns each file should contain, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or run our model; otherwise the file is considered bad and moved to the bad folder. Moreover, we also identify columns with null values. If an entire column of data is missing, we also consider the file bad; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
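A minimal validation sketch under these rules; the file-name pattern, column count and folder handling are assumptions, not the project's exact code:

```python
import os
import re
import shutil
import pandas as pd

# Hypothetical pattern: "wafer_", an 8-digit date, a 6-digit time, ".csv".
FILENAME_PATTERN = re.compile(r"^wafer_\d{8}_\d{6}\.csv$")

def validate_file(path, expected_columns, good_dir, bad_dir):
    name = os.path.basename(path)
    df = pd.read_csv(path)
    # A file is good only if its name matches the schema pattern, it has the
    # expected number of columns, and no column is entirely null.
    is_good = (
        FILENAME_PATTERN.match(name) is not None
        and df.shape[1] == expected_columns
        and not df.isnull().all().any()
    )
    shutil.move(path, os.path.join(good_dir if is_good else bad_dir, name))
    return is_good
```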

Data Insertion in Database:

First, we create a database with the given name. If the database already exists, we open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" is created in the database, depending on whether we are training or predicting, for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created, and the new files are inserted into the existing table, as we want training to be done on the new as well as the old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
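A minimal sketch of this step using SQLite; the table and file names follow the description above, but the project may use a different database engine:

```python
import sqlite3
import pandas as pd

def insert_good_files(db_name, table_name, csv_paths, export_csv):
    conn = sqlite3.connect(db_name)  # creates the database file if missing
    for path in csv_paths:
        df = pd.read_csv(path)
        # Append, so new files accumulate alongside the older training files.
        df.to_sql(table_name, conn, if_exists="append", index=False)
    # Export the full table to a single CSV for model training.
    pd.read_sql(f"SELECT * FROM {table_name}", conn).to_csv(export_csv, index=False)
    conn.close()
```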

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns. If present, the NaN values are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they provide no information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate model on the data in each cluster. The KMeans model is trained on the pre-processed data, and the model is saved for further use in prediction.

After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is tuned with the best parameters derived from GridSearch. We calculate the AUC score for each model and select the model with the best score. In this way, the best model is selected for each cluster, and all the cluster models are saved for use in prediction. In the end, the confusion matrix of the model associated with each cluster is also saved to give a glance at the performance of the models.
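A condensed sketch of this pipeline using scikit-learn and the kneed package; only two of the four algorithms and small parameter grids are shown for brevity, and the names and grids are assumptions, not the project's exact code:

```python
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

def preprocess(X):
    # Impute NaN values with the KNN imputer, then drop zero-variance columns.
    X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
    return X.loc[:, X.std() > 0]

def choose_k(X, max_k=10):
    # Elbow method: compute inertia for k = 1..max_k, locate the knee automatically.
    wcss = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
            for k in range(1, max_k + 1)]
    return KneeLocator(list(range(1, max_k + 1)), wcss,
                       curve="convex", direction="decreasing").knee

def train_per_cluster(X, y):
    # y is a numpy array in {0, 1}; map {-1, +1} labels with (y == 1).astype(int).
    kmeans = KMeans(n_clusters=choose_k(X), n_init=10, random_state=42).fit(X)
    candidates = [
        (RandomForestClassifier(), {"n_estimators": [100, 200]}),
        (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    ]
    best_models = {}
    for c in range(kmeans.n_clusters):
        mask = kmeans.labels_ == c
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[mask], y[mask], test_size=0.3, random_state=42)
        best_auc = -1.0
        for estimator, grid in candidates:
            # Tune each candidate with GridSearch, keep the highest-AUC model.
            gs = GridSearchCV(estimator, grid, cv=3).fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, gs.predict_proba(X_te)[:, 1])
            if auc > best_auc:
                best_auc, best_models[c] = auc, gs.best_estimator_
    return kmeans, best_models
```

Each cluster's winning model, along with the KMeans model itself, would then be serialized (for example with joblib) for later use in prediction.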

Prediction:

In prediction, the essential directories are created first. The data validation, data insertion and data pre-processing steps are similar to the training section. The KMeans model created during training is loaded, and the clusters for the pre-processed prediction data are predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions have been made for all the clusters, they are saved along with the wafer names in a CSV file at a given location.
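A minimal prediction sketch, assuming the models were saved with joblib under hypothetical file names and that the input CSV has already been pre-processed as in training:

```python
import joblib
import pandas as pd

def predict_wafers(pred_csv, out_csv):
    df = pd.read_csv(pred_csv)
    wafer_names = df.pop("Wafer")             # keep wafer names for the output
    kmeans = joblib.load("models/kmeans.pkl")  # KMeans saved during training
    clusters = kmeans.predict(df)
    predictions = pd.Series(0, index=df.index)
    for c in set(clusters):
        # Load the best model for this cluster and predict its rows.
        model = joblib.load(f"models/cluster_{c}.pkl")
        mask = clusters == c
        predictions[mask] = model.predict(df[mask])
    # Write wafer names alongside their predicted labels.
    pd.DataFrame({"Wafer": wafer_names, "Prediction": predictions}).to_csv(
        out_csv, index=False)
```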

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision