Credit Card Fraud Detection

Credit card fraud detection in Python using a Jupyter Notebook.

Overview

It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items they did not purchase.

Dataset

The dataset (creditcard.csv) is provided by Kaggle and can be found at https://www.kaggle.com/mlg-ulb/creditcardfraud. It contains transactions made by credit cards in September 2013 by European cardholders. It contains only numerical input variables, which are the result of a PCA transformation; due to confidentiality issues, the original features and more background information about the data cannot be provided. Features V1, V2, ..., V28 are the principal components obtained with PCA. The only features that have not been transformed with PCA are 'Time' and 'Amount'. 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning. 'Class' is the response variable and takes value 1 in case of fraud and 0 otherwise. The dataset is already preprocessed.

Approach

1. Split the dataset into train and test sets with a 0.75:0.25 ratio.
2. Brief analysis: the dataset is highly imbalanced, with 99.8% of transactions labeled as not fraud and only 0.2% labeled as fraud.
3. Balance the training set by bootstrapping (upsampling the fraud class). With only a few positives relative to negatives, a model would spend most of its training on negative examples and not learn enough from the positive ones.
4. Train a Random Forest with 20 trees and determine which features are most important for the model.
5. Train a Logistic Regression model.
6. Train a Gaussian Naive Bayes model.
7. Evaluate all three models on accuracy, precision, recall, and F1 score.

Illustrative code sketches for each of these steps are given at the end of this README.

Results

The Random Forest model has better accuracy and precision than the Logistic Regression and Gaussian Naive Bayes models. Logistic Regression has the best recall, but Random Forest has the best F1 score, the harmonic mean of precision and recall: F1 = 2 * precision * recall / (precision + recall).
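Code sketches

The snippets below are minimal sketches of each step, assuming pandas and scikit-learn; variable names, random_state values, and the stratified split are illustrative choices, not necessarily what the notebook itself uses.

Loading the data and making the 0.75:0.25 train/test split:

    # Load the Kaggle CSV and split 0.75:0.25.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("creditcard.csv")
    X = df.drop(columns=["Class"])
    y = df["Class"]

    # stratify keeps the 99.8:0.2 class ratio identical in both splits
    # (an assumption; a plain random split behaves almost the same at
    # this dataset size).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=42)

    # Confirm the imbalance: ~99.8% non-fraud vs. ~0.2% fraud.
    print(y.value_counts(normalize=True))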
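Upsampling the training set so both classes are equally represented. sklearn.utils.resample is one common way to bootstrap the minority class; the notebook's exact resampling code may differ:

    from sklearn.utils import resample

    train = pd.concat([X_train, y_train], axis=1)
    majority = train[train["Class"] == 0]
    minority = train[train["Class"] == 1]

    # Sample the fraud rows with replacement (a bootstrap) until they
    # match the non-fraud count, then recombine.
    minority_upsampled = resample(minority, replace=True,
                                  n_samples=len(majority), random_state=42)
    train_balanced = pd.concat([majority, minority_upsampled])

    X_train_bal = train_balanced.drop(columns=["Class"])
    y_train_bal = train_balanced["Class"]

Only the training set is resampled; the test set keeps its original distribution so that the evaluation reflects real-world class frequencies.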
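Random Forest with 20 trees, followed by the impurity-based feature importances used to see which features matter most:

    from sklearn.ensemble import RandomForestClassifier

    rf = RandomForestClassifier(n_estimators=20, random_state=42)
    rf.fit(X_train_bal, y_train_bal)

    # Rank features by importance, highest first.
    importances = pd.Series(rf.feature_importances_, index=X.columns)
    print(importances.sort_values(ascending=False).head(10))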
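The two remaining models, trained on the same balanced data (max_iter is raised here as a precaution, since scikit-learn's default of 100 iterations may not converge on this data; the notebook may not need it):

    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB

    logreg = LogisticRegression(max_iter=1000)
    logreg.fit(X_train_bal, y_train_bal)

    gnb = GaussianNB()
    gnb.fit(X_train_bal, y_train_bal)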
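Evaluating all three models on the untouched test set. F1 is the harmonic mean of precision and recall, so it rewards models that balance the two:

    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    for name, model in [("Random Forest", rf),
                        ("Logistic Regression", logreg),
                        ("Gaussian Naive Bayes", gnb)]:
        pred = model.predict(X_test)
        print(f"{name}: "
              f"accuracy={accuracy_score(y_test, pred):.4f} "
              f"precision={precision_score(y_test, pred):.4f} "
              f"recall={recall_score(y_test, pred):.4f} "
              f"f1={f1_score(y_test, pred):.4f}")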