To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a given wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the substrate needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data fall into three parts: wafer name, sensor values and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as file names, length of the date value in the file name, length of the time value in the file name, number of columns, names of the columns, and their datatypes.
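
For illustration, such a schema file might look like the JSON below; the key names and values here are assumptions chosen to match the description above, not copied from the actual project files:

```json
{
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 4,
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        "Output": "Integer"
    }
}
```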

Directory creation:

All the necessary folders are created up front to keep the files cleanly separated, so that the end user can access them easily.
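
As a minimal sketch (the folder names here are assumptions, not the project's actual layout), the directories can be created idempotently:

```python
import os

# Hypothetical folder layout; the project's actual names may differ.
FOLDERS = [
    "Training_Raw_Files/Good_Raw",
    "Training_Raw_Files/Bad_Raw",
    "models",
    "Prediction_Output",
    "Logs",
]

for folder in FOLDERS:
    os.makedirs(folder, exist_ok=True)  # no error if the folder already exists
```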

Data Validation:

In this step, we match the dataset against the provided schema file: the file names, the number of columns, the column names and their datatypes. If a file matches the schema, it is considered a good file on which we can train or predict; otherwise it is considered bad and moved to the bad-data folder. We also check the columns for null values: if an entire column is missing, the file is likewise treated as bad, whereas if only a fraction of a column's data is missing, we initially fill the gaps with NaN and keep the file as good data.
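
The sketch below illustrates this validation step under stated assumptions: the schema keys follow the hypothetical JSON shown earlier, and the file-name pattern, folder names and `validate` helper are illustrative (column-name and datatype checks are omitted for brevity):

```python
import json
import re
import shutil

import pandas as pd

with open("schema_training.json") as f:
    schema = json.load(f)

# e.g. wafer_08012020_120000.csv -> "wafer_" + 8-digit date + "_" + 6-digit time
pattern = re.compile(
    r"^wafer_\d{%d}_\d{%d}\.csv$"
    % (schema["LengthOfDateStampInFile"], schema["LengthOfTimeStampInFile"])
)

def validate(path: str, filename: str) -> None:
    """Move a raw file to Good_Raw or Bad_Raw based on the schema checks."""
    df = pd.read_csv(path)
    ok = (
        pattern.match(filename) is not None
        and df.shape[1] == schema["NumberofColumns"]
        and not df.isna().all().any()   # reject files with a fully empty column
    )
    dest = "Training_Raw_Files/Good_Raw/" if ok else "Training_Raw_Files/Bad_Raw/"
    shutil.move(path, dest + filename)
```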

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we simply open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" (depending on whether we are training or predicting) is created in the database for inserting the good data files obtained from the data validation step. If the table is already present, it is not recreated, and the new files are inserted into the existing table, since we want training to run on both new and old training files. Finally, the data stored in the database is exported as a CSV file to be used for model training.
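
A minimal sketch with SQLite (an assumption; the project may use a different database engine, and the table and file names here are illustrative):

```python
import sqlite3
from pathlib import Path

import pandas as pd

conn = sqlite3.connect("training.db")

# Append every validated file; to_sql creates the table on first use, so new
# files accumulate alongside files inserted in earlier runs.
for csv_file in Path("Training_Raw_Files/Good_Raw").glob("*.csv"):
    pd.read_csv(csv_file).to_sql(
        "train_good_raw_dt", conn, if_exists="append", index=False
    )

# Export everything (old and new files) as one CSV for model training.
pd.read_sql("SELECT * FROM train_good_raw_dt", conn).to_csv(
    "InputFile.csv", index=False
)
conn.close()
```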

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in its columns; if present, they are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they provide no information during model training. A prediction schema is then created from the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, with the "KneeLocator" function used to pick the elbow dynamically. The idea behind clustering is to train a separate model on the data in each cluster. The KMeans model is trained on the pre-processed data and saved for further use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is fitted with the best parameters derived from GridSearch, we calculate the AUC score of each model, and the model with the best score is selected for that cluster. All the per-cluster models are saved for use in prediction. Finally, the confusion matrix of the model associated with each cluster is also saved to give a glance at the models' performance.
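
Putting this together, here is a hedged sketch of the training pipeline; the column names ("Wafer", "Output"), parameter grids, file names and the 0/1 label mapping are assumptions for illustration, not the project's exact values:

```python
import os

import pandas as pd
import joblib
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

data = pd.read_csv("InputFile.csv")
X = data.drop(columns=["Wafer", "Output"])
y = (data["Output"] == 1).astype(int)  # map labels {-1, +1} to {0, 1} for XGBoost

# 1. Impute NaNs with the KNN imputer, then drop zero-variance columns.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
X = X.loc[:, X.std() > 0]

# 2. Choose the cluster count from the elbow of the KMeans inertia curve.
wcss = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
        for k in range(1, 11)]
k_opt = KneeLocator(range(1, 11), wcss,
                    curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=k_opt, n_init=10, random_state=42).fit(X)
os.makedirs("models", exist_ok=True)
joblib.dump(kmeans, "models/kmeans.pkl")

# 3. Grid-search all four algorithms per cluster and keep the best AUC.
candidates = {
    "rf": (RandomForestClassifier(), {"n_estimators": [100, 200]}),
    "knn": (KNeighborsClassifier(), {"n_neighbors": [3, 5]}),
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    "xgb": (XGBClassifier(eval_metric="logloss"), {"max_depth": [3, 5]}),
}

for c in range(k_opt):
    mask = kmeans.labels_ == c
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[mask], y[mask], test_size=0.3, random_state=42)
    best_auc, best_model = -1.0, None
    for name, (model, grid) in candidates.items():
        search = GridSearchCV(model, grid, cv=3).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, search.best_estimator_
    joblib.dump(best_model, f"models/cluster_{c}_model.pkl")
    # The per-cluster confusion matrix can be saved alongside the model.
    print(c, best_auc, confusion_matrix(y_te, best_model.predict(X_te)))
```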

Prediction:

In prediction, the essential directories are created first. The data validation, data insertion and data pre-processing steps are the same as in the training section. The KMeans model created during training is loaded, and the clusters for the pre-processed prediction data are predicted. Based on the cluster number, the corresponding model is loaded and used to predict the data in that cluster. Once predictions have been made for all clusters, they are saved, along with the wafer names, in a CSV file at a given location.
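
A sketch of the prediction flow, reusing the artifact and file names assumed in the training sketch above:

```python
import joblib
import numpy as np
import pandas as pd

kmeans = joblib.load("models/kmeans.pkl")

data = pd.read_csv("prediction_input.csv")  # already validated and pre-processed
wafer_names = data["Wafer"]
X = data.drop(columns=["Wafer"])

# Route each row to the model trained for its cluster.
clusters = kmeans.predict(X)
predictions = np.empty(len(X), dtype=int)
for c in np.unique(clusters):
    model = joblib.load(f"models/cluster_{c}_model.pkl")
    mask = clusters == c
    predictions[mask] = model.predict(X[mask])

# Map the 0/1 training labels back to the -1/+1 convention and save.
pd.DataFrame({
    "Wafer": wafer_names,
    "Prediction": np.where(predictions == 1, 1, -1),
}).to_csv("Prediction_Output/Predictions.csv", index=False)
```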

Deployment:

We will be deploying the model to Heroku Cloud.
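
Assuming the model is wrapped in a Flask app served by gunicorn (both assumptions, not confirmed by this README), the Heroku Procfile could be as small as:

```
web: gunicorn app:app
```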

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision