

crysx_nn

A simplistic and efficient pure-python neural network library from Phys Whiz with CPU and GPU support.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Features
  5. Roadmap
  6. Contributing
  7. License
  8. Contact
  9. Acknowledgments
  10. Citation

About The Project


Neural networks are an integral part of machine learning. The project provides an easy-to-use, yet efficient implementation that can be used in your projects or for teaching/learning purposes.

The library is written in pure Python, with optimizations using numpy, opt_einsum, and numba on the CPU, and cupy for CUDA (GPU) support.

The goal was to create a framework that is efficient yet easy to understand, so that everyone can see and learn about what goes on inside a neural network. After all, the project grew out of a semester project for the CP_IV: Machine Learning course at the University of Jena, Germany.

(back to top)

Built With

  • NumPy
  • opt_einsum
  • Numba
  • CuPy (for CUDA support)

(back to top)

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

You need to have python3 installed along with pip.

Installation

There are several ways to install crysx_nn:

  1. Install the release (stable) version from PyPI
    pip install crysx_nn
  2. Install the latest development version by cloning the git repo and installing it. This requires git to be installed.
    git clone https://github.com/manassharma07/crysx_nn.git
    cd crysx_nn
    pip install .
  3. Install the latest development version without git.
    pip install --upgrade https://github.com/manassharma07/crysx_nn/tarball/main

Check if the installation was successful by running the Python shell and trying to import the package:

python3
Python 3.7.11 (default, Jul 27 2021, 07:03:16) 
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import crysx_nn
>>> crysx_nn.__version__
'0.1.0'
>>> 

Finally, download the example script (here) for simulating logic gates like AND, XOR, NAND, and OR, and try running it:

python Simulating_logic_gates.py

(back to top)

Usage

The most important thing for using this library properly is to use 2D NumPy arrays for defining the inputs and expected outputs (targets) of a network. 1D arrays for inputs and targets are not supported and will result in an error.
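
For instance, a 1D target vector can be promoted to the required 2D column form before it is passed to the network. This is only an illustrative NumPy snippet, mirroring what the AND-gate example below does:

import numpy as np

targets_1d = np.array([0., 0., 0., 1.], dtype='float32')  # shape (4,)  -> not accepted
targets_2d = targets_1d.reshape(-1, 1)                     # shape (4, 1) -> accepted
print(targets_2d.shape)  # (4, 1)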

For example, let us try to simulate the AND logic gate. The AND gate takes two input bits and returns a single output bit. The bits can take a value of either 0 or 1. The AND gate returns 1 only if both inputs are 1; otherwise it returns 0.

The truth table of the AND gate is as follows

| x1 | x2 | output |
|----|----|--------|
| 0  | 0  | 0      |
| 0  | 1  | 0      |
| 1  | 0  | 0      |
| 1  | 1  | 1      |

The four possible sets of inputs are:

import numpy as np

inputs = np.array([[0.,0.,1.,1.],[0.,1.,0.,1.]]).T.astype('float32')
print(inputs)
print(inputs.dtype)

Output:

[[0. 0.]
 [0. 1.]
 [1. 0.]
 [1. 1.]]
float32

Similarly, set the corresponding four possible outputs as a 2D NumPy array:

# AND outputs
outputAND = np.array([0.,0.,0.,1.]) # 1D array
outputAND = np.asarray([outputAND]).T # 2D array
print('AND outputs\n', outputAND)

Output:

AND outputs
 [[0.]
 [0.]
 [0.]
 [1.]]

Next, we need to set some parameters of our neural network:

nInputs = 2 # No. of nodes in the input layer
neurons_per_layer = [3,1] # Neurons per layer (excluding the input layer)
activation_func_names = ['Sigmoid', 'Sigmoid']
nLayers = len(neurons_per_layer)
eeta = 0.5 # Learning rate
nEpochs=10**4 # For stochastic gradient descent
batchSize = 4 # No. of input samples to process at a time for optimization

For a better understanding, let us visualize it.

visualize(nInputs, neurons_per_layer, activation_func_names)

Output: a diagram of the network (2 input nodes, a hidden layer of 3 sigmoid neurons, and 1 sigmoid output neuron).

Now let us initialize the weights and biases. Weights and biases are provided as lists of 2D and 1D NumPy arrays, respectively (one NumPy array per layer). In our case, we have 2 layers (1 hidden + 1 output), so the lists of weights and biases will each contain 2 NumPy arrays.

# Initial guesses for weights
w1 = 0.30
w2 = 0.55
w3 = 0.20
w4 = 0.45
w5 = 0.50
w6 = 0.35
w7 = 0.15
w8 = 0.40
w9 = 0.25

# Initial guesses for biases
b1 = 0.60
b2 = 0.05

# We need to use a list instead of a NumPy array, since the
# weight matrices at each layer are not of the same dimensions
weights = []
# Weights for layer 1 --> 2
weights.append(np.array([[w1,w4],[w2, w5], [w3, w6]]))
# Weights for layer 2 --> 3
weights.append(np.array([[w7, w8, w9]]))
# List of biases at each layer
biases = []
biases.append(np.array([b1,b1,b1]))
biases.append(np.array([b2]))

weightsOriginal = weights
biasesOriginal = biases

print('Weights matrices: ',weights)
print('Biases: ',biases)

Output:

Weights matrices:  [array([[0.3 , 0.45],
       [0.55, 0.5 ],
       [0.2 , 0.35]]), array([[0.15, 0.4 , 0.25]])]
Biases:  [array([0.6, 0.6, 0.6]), array([0.05])]

Finally, it is time to train our neural network. We will use the mean squared error (MSE) loss function as the metric of performance. Currently, only stochastic gradient descent is supported.
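
For reference, here is a minimal NumPy sketch of what a mean-squared-error loss and its gradient compute. The library's MSE_loss and MSE_loss_grad are the optimized equivalents; the exact normalization used there is an assumption in this sketch:

import numpy as np

def mse_loss(predictions, targets):
    # Mean of the squared residuals over all elements
    return np.mean((predictions - targets)**2)

def mse_loss_grad(predictions, targets):
    # Gradient of the mean squared error with respect to the predictions
    return 2.0 * (predictions - targets) / predictions.size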

# Run optimization
# nn_optimize_fast, MSE_loss, and MSE_loss_grad are provided by the crysx_nn package
optWeights, optBiases, errorPlot = nn_optimize_fast(inputs, outputAND, activation_func_names, nLayers,
                                                    nEpochs=nEpochs, batchSize=batchSize, eeta=eeta,
                                                    weights=weightsOriginal, biases=biasesOriginal,
                                                    errorFunc=MSE_loss, gradErrorFunc=MSE_loss_grad,
                                                    miniterEpoch=1, batchProgressBar=False, miniterBatch=100)

The function nn_optimize_fast returns the optimized weights and biases, as well as the error at each epoch of the optimization.
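
To sanity-check the optimized parameters, you can also run a forward pass by hand with plain NumPy. This is only a sketch of the underlying math (it assumes each layer computes x @ W.T + b, as the weight shapes above suggest), not the library's own prediction routine:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Manual forward pass through the 2 -> 3 -> 1 network with the optimized parameters
hidden = sigmoid(inputs @ optWeights[0].T + optBiases[0])       # hidden-layer activations, shape (4, 3)
predictions = sigmoid(hidden @ optWeights[1].T + optBiases[1])  # network outputs, shape (4, 1)
print(np.round(predictions))  # should be close to the AND targets [[0.], [0.], [0.], [1.]]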

We can then plot the training loss at each epoch

import matplotlib.pyplot as plt

# Plot the error vs epochs
plt.plot(errorPlot)
plt.yscale('log')
plt.show()

Output: a log-scale plot of the training loss versus epoch.

For more examples, please refer to the Examples Section.

CrysX-NN (crysx_nn) also provides CUDA support through cupy versions of all the features, like activation functions, loss functions, neural network calculations, etc. Note: for small networks the Cupy versions may actually be slower than the CPU versions, but the benefit becomes evident as you go beyond roughly 1.5 million parameters.
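
As a rough sketch of what switching to the GPU path involves, the data simply has to live in CuPy arrays instead of NumPy arrays (the exact names of the CuPy-backed training functions are not shown here; please refer to the documentation for those):

import cupy as cp

inputs_gpu = cp.asarray(inputs)       # copy the NumPy inputs to GPU memory
targets_gpu = cp.asarray(outputAND)   # copy the targets to GPU memory

# ... train with the CuPy-backed crysx_nn routines on inputs_gpu / targets_gpu ...

inputs_back = cp.asnumpy(inputs_gpu)  # copy an array back to the host when needed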

(back to top)

Features

  • Efficient implementations of activation functions and their gradients (a minimal NumPy sketch of one such pair follows this list)
    • Sigmoid, Sigmoid_grad
    • ReLU, ReLU_grad
    • Softmax, Softmax_grad
    • Softplus, Softplus_grad
    • Tanh, Tanh_grad
    • Tanh_offset, Tanh_offset_grad
    • Identity, Identity_grad
  • Efficient implementations of loss functions and their gradients
    • Mean squared error
    • Binary cross entropy
  • Neural network optimization using
    • Stochastic Gradient Descent
  • Support for batched inputs, i.e., supplying a matrix of inputs where the columns correspond to features and the rows to samples
  • Support for GPU through Cupy: pip install cupy-cuda102 (tested with CUDA 10.2)
  • JIT compiled functions when possible for efficiency
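
As a small illustration of one of the activation/gradient pairs above, here is a minimal NumPy sketch of what Sigmoid and Sigmoid_grad compute (the library's own versions are vectorized and, where possible, JIT compiled):

import numpy as np

def Sigmoid(x):
    # sigma(x) = 1 / (1 + exp(-x)), applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

def Sigmoid_grad(x):
    # d(sigma)/dx = sigma(x) * (1 - sigma(x))
    s = Sigmoid(x)
    return s * (1.0 - s)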

(back to top)

Roadmap

  • Weights and biases initialization
  • More activation functions
    • Identity, LeakyReLU, Tanh, etc.
  • More loss functions
    • categorical cross entropy, and others
  • Optimization algorithms apart from Stochastic Gradient Descent, like ADAM, RMSprop, etc.
  • Implement regularizers
  • Batch normalization
  • Dropout
  • Early stopping
  • A predict function that returns the output of the last layer and the loss/accuracy
  • Some metric functions, although there is no harm in using sklearn for that

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Manas Sharma - @manassharma07 - [email protected]

Project Link: https://github.com/manassharma07/crysx_nn

Project Documentation: https://bragitoff.com

Blog: https://bragitoff.com

(back to top)

Acknowledgments

(back to top)

Citation

If you use this library and would like to cite it, you can use:

 M. Sharma, "CrysX-NN: Neural Network library", 2021. [Online]. Available: https://github.com/manassharma07/crysx_nn. [Accessed: DD- Month- 20YY].

or:

@Misc{,
  author = {Manas Sharma},
  title  = {CrysX-NN: Neural Network library},
  month  = dec,
  year   = {2021},
  note   = {Online; accessed DD-Month-20YY},
  url    = {https://github.com/manassharma07/crysx_nn},
}

(back to top)


Comments
  • NAN loss or loss gradient when using Binary Cross Entropy or Categorical Cross Entropy sometimes

    This is a strange bug: using a batch_size of about 32 or smaller results in NaN values in the loss-gradient calculations, but the bug does not appear with a larger batch size (around 60-200).

    The bug was observed when training on the MNIST dataset.

    Using ReLU (Hidden, size=256) and Softmax (Output, size=10) activation layers.

    bug 
    opened by manassharma07 2
  • Reduce the number of parameters required for `nn_optimize` function

    Add defaults for some parameters; a default could even be None, and if the value of a parameter remains None (i.e., the user didn't provide it), then use our own default value.

    For example,

    • [ ] for batchSize we can use: min(32,nSamples)

    • [ ] for weights and biases we can use an initialisation function.

    enhancement 
    opened by manassharma07 2
Releases(v_0.1.7)
  • v_0.1.7(Jan 16, 2022)

    Finalized the example for MNIST and MNIST_Plus.

    Added the ability to calculate accuracy during training as well as during prediction.

    Added confusion matrix calculation and visualization functions in utils.py.

    Source code(tar.gz)
    Source code(zip)
  • v_0.1.6(Jan 2, 2022)

    1. Both forward feed and back propagation are now significantly faster, for both NumPy and Cupy versions.
    2. Furthermore, several more activation and loss functions are also available now.
    Source code(tar.gz)
    Source code(zip)
  • v_0.1.5(Dec 27, 2021)

    Support for CUDA is here via Cupy.

    Slower than CPU for smaller networks but the benefits are very evident for larger networks with more than 1.5 Million parameters.

    Tested on

    • XPS i7 11800H + 3050 Ti,
    • Google Colab K80
    • Kaggle
    Source code(tar.gz)
    Source code(zip)
  • v_0.1.2(Dec 25, 2021)

  • v_0.1.1(Dec 25, 2021)

  • v_0.0.1(Dec 25, 2021)
