Credit fraud detection in Python using a Jupyter Notebook

Overview

Context: It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.

Dataset

The dataset (creditcard.csv) is provided by Kaggle and can be found at https://www.kaggle.com/mlg-ulb/creditcardfraud. It contains transactions made by credit cards in September 2013 by European cardholders. It contains only numerical input variables, which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, the original features and more background information about the data cannot be provided. Features V1, V2, ..., V28 are the principal components obtained with PCA; the only features that have not been transformed with PCA are 'Time' and 'Amount'. The feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount and can be used for example-dependent cost-sensitive learning. The feature 'Class' is the response variable and takes the value 1 in case of fraud and 0 otherwise. The dataset is already preprocessed.

Approach

1. Split the dataset into train and test sets with a 0.75:0.25 ratio.
2. A brief analysis showed that the dataset is highly imbalanced: 99.8% of the transactions are labeled as not fraud and only 0.2% as fraud.
3. Balance the training set by bootstrapping (upsampling the minority class). With only a few positives relative to negatives, the model would spend most of its training time on negative examples and not learn enough from the positive ones, so the training data was upsampled to make it balanced.
4. Train a Random Forest with 20 trees and determine which features are most important for the model.
5. Train a Logistic Regression model.
6. Train a Gaussian Naive Bayes model.
7. Evaluate all three models on accuracy, precision, recall and F1 score.

Minimal code sketches of these steps follow below.
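The split and imbalance check might look like the following sketch. The file name and the 'Class' column come from the dataset description; the stratified split and the random seed are assumptions, not taken from the original notebook:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the Kaggle dataset (creditcard.csv) described above.
df = pd.read_csv("creditcard.csv")

X = df.drop(columns=["Class"])
y = df["Class"]

# 0.75:0.25 train/test split; stratify=y is an assumption, used here so the
# tiny fraud class is represented in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# Class balance check: roughly 99.8% non-fraud (0) vs 0.2% fraud (1).
print(y.value_counts(normalize=True))
```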
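Upsampling the fraud class in the training set could be done with scikit-learn's resample; this is one plausible way to implement the bootstrapping described above (continuing from the split sketch), not necessarily the exact code used:

```python
import pandas as pd
from sklearn.utils import resample

# Recombine features and labels so rows can be resampled together.
train = pd.concat([X_train, y_train], axis=1)
majority = train[train["Class"] == 0]
minority = train[train["Class"] == 1]

# Bootstrap (sample with replacement) the fraud cases until the
# training set is balanced.
minority_upsampled = resample(
    minority,
    replace=True,
    n_samples=len(majority),
    random_state=42,
)

train_balanced = pd.concat([majority, minority_upsampled])
X_train_bal = train_balanced.drop(columns=["Class"])
y_train_bal = train_balanced["Class"]
```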
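A sketch of the Random Forest step with 20 trees and a feature-importance ranking, assuming the balanced training set from the previous sketch (hyperparameters other than the tree count are defaults chosen here for illustration):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Random Forest with 20 trees, as stated in the write-up.
rf = RandomForestClassifier(n_estimators=20, random_state=42)
rf.fit(X_train_bal, y_train_bal)

# Rank the features (V1..V28, Time, Amount) by importance to the model.
importances = pd.Series(rf.feature_importances_, index=X_train_bal.columns)
print(importances.sort_values(ascending=False).head(10))
```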
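The two remaining models could be trained on the same balanced data; the raised max_iter for Logistic Regression is an assumption added so the solver converges, not a setting from the original notebook:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Logistic Regression on the upsampled training set.
logreg = LogisticRegression(max_iter=1000)
logreg.fit(X_train_bal, y_train_bal)

# Gaussian Naive Bayes with default settings.
gnb = GaussianNB()
gnb.fit(X_train_bal, y_train_bal)
```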
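Finally, a sketch of how all three models could be compared on the untouched test split using the four metrics named in the write-up (reusing the model objects from the sketches above):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Evaluate each model on the held-out test set (no upsampling applied here).
for name, model in [("Random Forest", rf),
                    ("Logistic Regression", logreg),
                    ("Gaussian Naive Bayes", gnb)]:
    pred = model.predict(X_test)
    print(
        f"{name}: "
        f"accuracy={accuracy_score(y_test, pred):.4f} "
        f"precision={precision_score(y_test, pred):.4f} "
        f"recall={recall_score(y_test, pred):.4f} "
        f"f1={f1_score(y_test, pred):.4f}"
    )
```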
Results

The Random Forest model has better accuracy and precision than the Logistic Regression and Gaussian Naive Bayes models. Logistic Regression has the best recall, but Random Forest has the best F1 score, which is the harmonic mean of precision and recall.