
# Academic-DeepNeuralNetsFromScratch

## Overview

A framework that constructs deep neural networks, autoencoders, logistic regressors, and linear networks without the use of any outside machine learning libraries - all from scratch.

This project was constructed for the Introduction to Machine Learning course, class 605.649 section 84, at Johns Hopkins University. FranceLab4 is a machine learning toolkit that implements several algorithms for classification and regression tasks. Specifically, the toolkit coordinates a linear network, a logistic regressor, an autoencoder, and a neural network that implements backpropagation; it also leverages data structures built in the preceding labs. FranceLab4 is a software module written in Python 3.7 that implements these algorithms.

## Notes for Graders

All files of concern for this project (with the exception of main.py) may be found in the Linear_Network, Logistic_Regression, and Neural_Network folders. I kept most of my files from Projects 1, 2, and 3 because I ended up reusing their cross-validation, encoding, and other helper methods. However, these three folders contain the neural network algorithms of interest.

I have created blocks of code for you to test and run each algorithm if you choose to do so. In __main__.py, scroll to the bottom and find the main function. Simply comment or uncomment the blocks of code you wish to test.

Each neural network and autoencoder constructed is subclassed from (i.e., inherits from) the NeuralNet class in neural_net.py. I simply initialize the class differently in order to construct an autoencoder, a feed-forward neural network, or a combination of both.
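For illustration, here is a minimal sketch of the "one class, configured differently" idea. The real NeuralNet constructor in neural_net.py has its own parameters; the class and argument names below (TinyNet, layer_sizes, activation) are illustrative stand-ins, not the project's actual API.

```python
# Hypothetical sketch: one network class, initialized differently to act as
# either a feed-forward classifier or an autoencoder.
import numpy as np

class TinyNet:
    """A stand-in for NeuralNet: a plain fully-connected network."""

    def __init__(self, layer_sizes, activation=np.tanh):
        self.activation = activation
        # One weight matrix (plus bias vector) per pair of adjacent layers.
        self.weights = [np.random.randn(m, n) * 0.1
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, x):
        for w, b in zip(self.weights, self.biases):
            x = self.activation(x @ w + b)
        return x

# A feed-forward classifier: inputs -> hidden -> class scores.
classifier = TinyNet(layer_sizes=[8, 5, 3])

# An autoencoder: same class, but the output width equals the input width
# and the middle layer forms the bottleneck (encoder -> decoder).
autoencoder = TinyNet(layer_sizes=[8, 3, 8])

x = np.random.randn(4, 8)            # a small batch of 4 examples
print(classifier.forward(x).shape)   # (4, 3)
print(autoencoder.forward(x).shape)  # (4, 8)
```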

The data reported in my paper were produced with k-fold cross-validation (KFCV). However, within the main program you may notice that the number of folds k has been reduced to 2 to make the analysis quicker and the console output easier to follow.
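The project uses its own KFCV helpers from the earlier labs; purely as a sketch of the fold bookkeeping (with k reduced to 2 as in the driver), the splitting looks roughly like this. The function name and signature below are illustrative, not the project's actual helpers.

```python
# Minimal sketch of k-fold cross-validation index splitting with k = 2.
import numpy as np

def k_fold_indices(n_samples, k=2, seed=0):
    """Yield (train_idx, test_idx) pairs covering the dataset k times."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, test_idx

for fold, (train_idx, test_idx) in enumerate(k_fold_indices(10, k=2)):
    # Train on train_idx and evaluate on test_idx; here we just report sizes.
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")
```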

The construction of a linear network begins on line 84 in __main__.py.

The construction of a logistic regressor begins on line 102 in __main__.py.

The construction of a standalone autoencoder begins on line 128 in __main__.py.

The construction of a standalone feed-forward neural network begins on line 141 in __main__.py.

The construction of a network in which an autoencoder is trained, its decoder removed, and its encoder attached to a new hidden layer and prediction layer to form a new neural network begins on line 221 in __main__.py.
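To make that last construction concrete, here is a hypothetical sketch of the idea: train an autoencoder, discard the decoder, and reuse the encoder weights as the first layer of a new classifier. The layer sizes and variable names are illustrative and do not reflect the project's actual code on line 221.

```python
# Hypothetical sketch of the autoencoder-to-classifier construction:
# train an autoencoder, drop the decoder, and stack the encoder under a new
# hidden layer and a prediction layer.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_bottleneck, n_hidden, n_classes = 8, 3, 5, 2

# 1. "Train" an autoencoder (weights shown untrained for brevity).
enc_W = rng.normal(scale=0.1, size=(n_inputs, n_bottleneck))   # encoder
dec_W = rng.normal(scale=0.1, size=(n_bottleneck, n_inputs))   # decoder
# ... autoencoder training via backpropagation would go here ...

# 2. Remove the decoder; keep only the encoder weights.
del dec_W

# 3. Attach the pretrained encoder to a new hidden layer and a prediction
#    layer, forming the final feed-forward network.
hid_W = rng.normal(scale=0.1, size=(n_bottleneck, n_hidden))
out_W = rng.normal(scale=0.1, size=(n_hidden, n_classes))

def predict(x):
    h1 = np.tanh(x @ enc_W)   # pretrained encoder layer
    h2 = np.tanh(h1 @ hid_W)  # new hidden layer
    return h2 @ out_W         # prediction layer (class scores)

print(predict(rng.normal(size=(4, n_inputs))).shape)  # (4, 2)
```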

The code for the weight updates and the forward and backward propagation may be found in the following files within the Neural_Network folder (a generic sketch of these operations follows the list):

  • layer.py
  • optimizer_function.py
  • neural_net.py
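Purely for orientation, the sketch below shows the kind of forward pass, backward pass, and gradient-descent weight update that such files typically implement for a single fully-connected layer. It is a generic example and does not reproduce the actual code in layer.py, optimizer_function.py, or neural_net.py.

```python
# Generic sketch: one sigmoid layer trained by gradient descent on a
# mean-squared-error loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))           # batch of 4 inputs
y = rng.integers(0, 2, size=(4, 1))   # binary targets
W = rng.normal(scale=0.1, size=(3, 1))
b = np.zeros(1)
learning_rate = 0.1

for step in range(100):
    # Forward pass: affine transform followed by a sigmoid activation.
    y_hat = sigmoid(X @ W + b)

    # Backward pass: gradient of the MSE loss w.r.t. W and b,
    # using sigmoid'(z) = y_hat * (1 - y_hat).
    delta = (y_hat - y) * y_hat * (1.0 - y_hat)
    grad_W = X.T @ delta / len(X)
    grad_b = delta.mean(axis=0)

    # Weight update: plain gradient descent.
    W -= learning_rate * grad_W
    b -= learning_rate * grad_b

print("final training loss:", float(np.mean((sigmoid(X @ W + b) - y) ** 2)))
```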

__main__.py is the driver that imports the dataset, cleans the data, coordinates KFCV, and initializes each of the neural network algorithms.

## Running FranceLab4

  1. Ensure Python 3.7 is installed on your computer.
  2. Navigate to the top-level FranceLab4 directory (the directory that contains the Lab4 module). For example, cd User\Documents\PythonProjects\FranceLab4. Do NOT cd into the Lab4 module itself.
  3. Run the program as a module: python3 -m Lab4.
  4. Input and output files are located in the io_files subdirectory.

## FranceLab4 Usage

usage: python3 -m Lab4
## Owner

Kordel K. France
Artificial Intelligence Engineer, Algorithmic Trader. I build software that finds order within chaos.