

crysx_nn

A simplistic and efficient pure-python neural network library from Phys Whiz with CPU and GPU support.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Features
  5. Roadmap
  6. Contributing
  7. License
  8. Contact
  9. Acknowledgments
  10. Citation

About The Project


Neural networks are an integral part of machine learning. The project provides an easy-to-use, yet efficient implementation that can be used in your projects or for teaching/learning purposes.

The library is written in pure Python, with optimizations using numpy, opt_einsum, and numba on the CPU, and cupy for CUDA (GPU) support.

The goal was to create a framework that is efficient yet easy to understand, so that everyone can see and learn what goes on inside a neural network. After all, the project grew out of a semester project for the CP_IV: Machine Learning course at the University of Jena, Germany.

(back to top)

Built With

(back to top)

Getting Started

To get a local copy up and running follow these simple example steps.

Prerequisites

You need to have python3 installed along with pip.
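
You can check that both are available from a terminal, for example:

python3 --version
pip --version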

Installation

There are several ways to install crysx_nn:

  1. Install the release (stable) version from PyPI
    pip install crysx_nn
  2. Install the latest development version by cloning the git repo and installing it. This requires git to be installed.
    git clone https://github.com/manassharma07/crysx_nn.git
    cd crysx_nn
    pip install .
  3. Install the latest development version without git.
    pip install --upgrade https://github.com/manassharma07/crysx_nn/tarball/main

Check whether the installation was successful by starting a Python shell and importing the package:

python3
Python 3.7.11 (default, Jul 27 2021, 07:03:16) 
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import crysx_nn
>>> crysx_nn.__version__
'0.1.0'
>>> 

Finally, download the example script (here) for simulating logic gates like AND, XOR, NAND, and OR, and try running it:

python Simulating_logic_gates.py

(back to top)

Usage

The most important thing for using this library properly is to use 2D NumPy arrays for defining the inputs and expected outputs (targets) of a network. 1D arrays for inputs and targets are not supported and will result in an error.
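
If your data starts out as 1D arrays, a quick reshape gives the expected 2D layout. Below is a minimal NumPy sketch (the array is just an illustration):

import numpy as np

x = np.array([0., 1., 0., 1.], dtype='float32')  # 1D array, shape (4,)
x2d = x.reshape(-1, 1)                            # 2D array, shape (4, 1): 4 samples, 1 feature
print(x2d.shape)  # (4, 1)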

For example, let us try to simulate the logic gate AND. The AND gate takes two input bits and returns a single output bit. The bits can take a value of either 0 or 1. The AND gate returns 1 only if both inputs are 1; otherwise it returns 0.

The truth table of the AND gate is as follows

x1 x2 output
0 0 0
0 1 0
1 0 0
1 1 1

The four possible sets of inputs are

import numpy as np

inputs = np.array([[0.,0.,1.,1.],[0.,1.,0.,1.]]).T.astype('float32')
print(inputs)
print(inputs.dtype)

Output:

[[0. 0.]
 [0. 1.]
 [1. 0.]
 [1. 1.]]
float32

Similarly, set the corresponding four possible outputs as a 2D numpy array

# AND outputs
outputAND = np.array([0.,0.,0.,1.]) # 1D array
outputAND = np.asarray([outputAND]).T # 2D array
print('AND outputs\n', outputAND)

Output:

AND outputs
 [[0.]
 [0.]
 [0.]
 [1.]]

Next, we need to set some parameters of our neural network:

nInputs = 2 # No. of nodes in the input layer
neurons_per_layer = [3,1] # Neurons per layer (excluding the input layer)
activation_func_names = ['Sigmoid', 'Sigmoid']
nLayers = len(neurons_per_layer)
eeta = 0.5 # Learning rate
nEpochs=10**4 # For stochastic gradient descent
batchSize = 4 # No. of input samples to process at a time for optimization

For a better understanding, let us visualize it.

visualize(nInputs, neurons_per_layer, activation_func_names)

Output: a diagram of the network (2 input nodes, a hidden layer of 3 Sigmoid neurons, and an output layer of 1 Sigmoid neuron)

Now let us initialize the weights and biases. Weights and biases are provided as lists of 2D and 1D NumPy arrays, respectively (one NumPy array per layer). In our case, we have 2 layers (1 hidden + 1 output), so the lists of weights and biases will each contain 2 NumPy arrays.

# Initial guesses for weights
w1 = 0.30
w2 = 0.55
w3 = 0.20
w4 = 0.45
w5 = 0.50
w6 = 0.35
w7 = 0.15
w8 = 0.40
w9 = 0.25

# Initial guesses for biases
b1 = 0.60
b2 = 0.05

# We need a list instead of a NumPy array, since the
# weight matrices at each layer do not have the same dimensions
weights = [] 
# Weights for layer 1 --> 2
weights.append(np.array([[w1,w4],[w2, w5], [w3, w6]]))
# Weights for layer 2 --> 3
weights.append(np.array([[w7, w8, w9]]))
# List of biases at each layer
biases = []
biases.append(np.array([b1,b1,b1]))
biases.append(np.array([b2]))

weightsOriginal = weights
biasesOriginal = biases

print('Weights matrices: ',weights)
print('Biases: ',biases)

Output:

Weights matrices:  [array([[0.3 , 0.45],
       [0.55, 0.5 ],
       [0.2 , 0.35]]), array([[0.15, 0.4 , 0.25]])]
Biases:  [array([0.6, 0.6, 0.6]), array([0.05])]

Finally, it is time to train our neural network. We will use the mean squared error (MSE) loss function as the performance metric. Currently, only stochastic gradient descent is supported.
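
For reference, here is a plain-NumPy sketch of what a mean squared error loss and its gradient compute; the library's MSE_loss and MSE_loss_grad may differ in details such as normalization.

import numpy as np

def mse_loss(predictions, targets):
    # Mean of the squared differences over all elements
    return np.mean((predictions - targets)**2)

def mse_loss_grad(predictions, targets):
    # Gradient of the mean squared error with respect to the predictions
    return 2.0 * (predictions - targets) / predictions.size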

# Run optimization
optWeights, optBiases, errorPlot = nn_optimize_fast(inputs, outputAND, activation_func_names, nLayers,
                                                    nEpochs=nEpochs, batchSize=batchSize, eeta=eeta,
                                                    weights=weightsOriginal, biases=biasesOriginal,
                                                    errorFunc=MSE_loss, gradErrorFunc=MSE_loss_grad,
                                                    miniterEpoch=1, batchProgressBar=False, miniterBatch=100)

The function nn_optimize_fast returns the optimized weights and biases, as well as the error at each epoch of the optimization.
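
To sanity-check the result, you can run a hand-rolled forward pass with the optimized weights and biases. This is a minimal NumPy sketch for illustration, not the library's own prediction routine:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a = inputs  # shape (4, 2): one row per input sample
for W, b in zip(optWeights, optBiases):
    # Each W has shape (n_out, n_in) and each b has shape (n_out,)
    a = sigmoid(a @ W.T + b)

print(np.round(a))  # should be close to [[0.], [0.], [0.], [1.]] after training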

We can then plot the training loss at each epoch

import matplotlib.pyplot as plt

# Plot the error vs epochs
plt.plot(errorPlot)
plt.yscale('log')
plt.show()

Output: a plot of the training loss (log scale) versus epoch.

For more examples, please refer to the Examples section.

CrysX-NN (crysx_nn) also provides CUDA support through CuPy versions of all the features like activation functions, loss functions, neural network calculations, etc. Note: for small networks the CuPy versions may actually be slower than the CPU versions, but the benefit becomes evident as you go beyond 1.5 million parameters.
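
As a rough sketch of what moving to the GPU looks like, the host-side NumPy arrays are first copied to CuPy arrays and then passed to the CuPy variants of the routines (this assumes cupy is installed; see the documentation for the exact function names):

import cupy as cp

inputs_gpu = cp.asarray(inputs)          # copy the NumPy inputs to the GPU
outputAND_gpu = cp.asarray(outputAND)
weights_gpu = [cp.asarray(w) for w in weightsOriginal]
biases_gpu = [cp.asarray(b) for b in biasesOriginal]
# These CuPy arrays are then used in place of the NumPy ones when calling the
# CuPy versions of the optimization and activation functions.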

(back to top)

Features

  • Efficient implementations of activation functions and their gradients
    • Sigmoid, Sigmoid_grad
    • ReLU, ReLU_grad
    • Softmax, Softmax_grad
    • Softplus, Softplus_grad
    • Tanh, Tanh_grad
    • Tanh_offset, Tanh_offset_grad
    • Identity, Identity_grad
  • Efficient implementations of loss functions and their gradients
    • Mean squared error
    • Binary cross entropy
  • Neural network optimization using
    • Stochastic Gradient Descent
  • Support for batched inputs, i.e., supplying a matrix of inputs where the columns correspond to features and the rows to samples
  • Support for GPU through CuPy: pip install cupy-cuda102 (tested with CUDA 10.2)
  • JIT compiled functions when possible for efficiency

(back to top)

Roadmap

  • Weights and biases initialization
  • More activation functions
    • Identity, LeakyReLU, Tanh, etc.
  • More loss functions
    • categorical cross entropy, and others
  • Optimization algorithms apart from Stochastic Gradient Descent, like ADAM, RMSprop, etc.
  • Implement regularizers
  • Batch normalization
  • Dropout
  • Early stopping
  • A predict function that returns the output of the last layer and the loss/accuracy
  • Some metric functions, although there is no harm in using sklearn for that

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Manas Sharma - @manassharma07 - [email protected]

Project Link: https://github.com/manassharma07/crysx_nn

Project Documentation: https://bragitoff.com

Blog: https://bragitoff.com

(back to top)

Acknowledgments

(back to top)

Citation

If you use this library and would like to cite it, you can use:

 M. Sharma, "CrysX-NN: Neural Network library", 2021. [Online]. Available: https://github.com/manassharma07/crysx_nn. [Accessed: DD- Month- 20YY].

or:

@Misc{,
  author = {Manas Sharma},
  title  = {CrysX-NN: Neural Network library},
  month  = dec,
  year   = {2021},
  note   = {Online; accessed DD-Month-20YY},
  url    = {https://github.com/manassharma07/crysx_nn},
}

(back to top)


Comments
  • NaN loss or loss gradient when using binary cross entropy or categorical cross entropy (sometimes)

    This is a strange bug: using a batch size of 32 or smaller sometimes results in NaN values in the loss-gradient calculations, but the bug does not appear with a larger batch size of around 60-200.

    The bug was observed when training on the MNIST dataset, using ReLU (hidden layer, size 256) and Softmax (output layer, size 10) activation layers.

    bug
    opened by manassharma07 2
  • Reduce the number of parameters required for the `nn_optimize` function

    Add defaults for the parameters; a default could even be None, and if a parameter's value remains None (i.e., the user didn't provide it), fall back to our default values.

    For example:

    • [ ] for batchSize we can use min(32, nSamples)

    • [ ] for weights and biases we can use an initialisation function

    enhancement
    opened by manassharma07 2
Releases (v_0.1.7)
  • v_0.1.7(Jan 16, 2022)

    Finalized the example for MNIST and MNIST_Plus.

    Added the ability to calculate accuracy during training as well as during prediction.

    Added confusion matrix calculation and visualization functions in utils.py.

  • v_0.1.6(Jan 2, 2022)

    1. Both the forward pass and backpropagation are now significantly faster, for both the NumPy and CuPy versions.
    2. Several more activation and loss functions are also available now.
  • v_0.1.5(Dec 27, 2021)

    Support for CUDA is here, via CuPy.

    Slower than the CPU for smaller networks, but the benefits are very evident for larger networks with more than 1.5 million parameters.

    Tested on

    • XPS i7 11800H + 3050 Ti,
    • Google Colab K80
    • Kaggle
  • v_0.1.2(Dec 25, 2021)

  • v_0.1.1(Dec 25, 2021)

  • v_0.0.1(Dec 25, 2021)
