A reinforcement learning library (framework) designed for PyTorch, implementing DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA, and more.

Overview


Automatic, Readable, Reusable, Extendable

Machin is a reinforcement learning library designed for PyTorch.


Build status

Platform Status
Linux Jenkins build
Windows Windows build

Supported Models


Anything, including recurrent networks.

Supported algorithms


Machin currently implements the following algorithms, and the list is still growing:

Single agent algorithms: DQN, DDPG, A2C, PPO, SAC

Multi-agent algorithms: MADDPG

Imitation learning algorithms (Behavioral Cloning, Inverse RL, GAIL)

Massively parallel algorithms: A3C, APEX, IMPALA

Enhancements:

Algorithms to be supported:

Features


1. Automatic

Starting from version 0.4.0, Machin supports automatic config generation. You can generate a configuration with:

python -m machin.auto generate --algo DQN --env openai_gym --output config.json

and then automatically launch the experiment with PyTorch Lightning:

python -m machin.auto launch --config config.json

2. Readable

Compared to other reinforcement learning libraries such as rlpyt, ray, and baselines, Machin tries to provide a simple, clear implementation of RL algorithms.

All algorithms in Machin are designed with minimal abstractions and come with detailed documentation, as well as various helpful tutorials.

3. Reusable

Machin takes an approach similar to PyTorch's, encapsulating algorithms and data structures in their own classes. Users do not need to set up a series of data collectors, trainers, runners, samplers... to use them; just import them.

The only restriction placed on your models is their input / output format; these restrictions are minimal, making it easy to adapt the algorithms to your custom environments.
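
For example, here is a minimal sketch of this import-and-use style, loosely based on the library's DQN tutorial (the exact class and method signatures are assumptions here and may differ between Machin versions):

import torch as t
import torch.nn as nn
from machin.frame.algorithms import DQN

# A simple Q-network. Machin only constrains the input / output format:
# a state tensor goes in, one Q-value per action comes out.
class QNet(nn.Module):
    def __init__(self, state_dim, action_num):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, 16)
        self.fc2 = nn.Linear(16, action_num)

    def forward(self, state):
        return self.fc2(t.relu(self.fc1(state)))

q_net = QNet(4, 2)
q_net_target = QNet(4, 2)
dqn = DQN(q_net, q_net_target, t.optim.Adam, nn.MSELoss(reduction="sum"))

# Act, store a transition, and update -- no separate collectors,
# trainers, runners, or samplers are needed.
state = t.zeros(1, 4)
action = dqn.act_discrete_with_noise({"state": state})
dqn.store_transition({
    "state": {"state": state},
    "action": {"action": action},
    "next_state": {"state": state},
    "reward": 0.0,
    "terminal": False,
})
# Once enough transitions have been collected, each call trains on a sampled batch.
dqn.update()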

4. Extendable

Machin is built upon PyTorch, and thanks to its powerful RPC API, complex distributed programs can be constructed. Machin provides implementations of enhanced parallel execution pools, automatic model assignment, role-based RPC scaling, RPC service discovery and registration, etc.

On top of these core functions, Machin provides tested, high-performance distributed training algorithm implementations, such as A3C, APEX, and IMPALA, to ease your design.
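
To illustrate the kind of primitive these features build on, here is a minimal sketch of PyTorch's own torch.distributed.rpc API (plain PyTorch, not Machin's higher-level wrappers; the worker names, address, and port are arbitrary):

import os
import torch as t
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def add(x, y):
    # Runs on whichever worker receives the remote call.
    return x + y

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    if rank == 0:
        # Synchronously invoke add() on worker1 and fetch the result.
        result = rpc.rpc_sync("worker1", add, args=(t.ones(2), t.ones(2)))
        print(result)  # tensor([2., 2.])
    rpc.shutdown()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2)

Machin layers features such as parallel execution pools, automatic model assignment, and RPC service discovery on top of primitives like this.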

5. Reproducible

Machin is weakly reproducible: for each release, our test framework directly trains every implemented RL algorithm, and if any of them cannot reach the target score, the test fails.

However, the tests are currently not guaranteed to match those in the original papers exactly, due to the large variety of environments used in the original research.

Documentation


See here. Examples are located in the examples directory.

Installation


Machin is hosted on PyPI. Python >= 3.6 and PyTorch >= 1.6.0 are required. You may install the Machin library by simply typing:

pip install machin

If you are using conda to manage your environments, it is suggested that you create a virtual environment first, to prevent pip from changing your packages without conda knowing:

conda create -n some_env pip
conda activate some_env
pip install machin

Note: currently, only a fraction of all functions is supported on platforms other than Linux (mainly the distributed algorithms are affected). To test whether the code is running correctly, you can run the corresponding test script for your platform in the root directory:

run_win_test.bat
run_linux_test.sh
run_macos_test.sh

Some errors may occur due to an incorrect setup of libraries; make sure you have installed graphviz, etc.

Contributing


Any contribution would be great; don't hesitate to submit a PR! Please follow the instructions in this file.

Issues


If you have any issues, please use the template markdown files in the .github/ISSUE_TEMPLATE folder to format your issue before opening a new one. We will try our best to respond to your feature requests and problems.

Citing


We would be very grateful if you could cite our work in your publications:

@misc{machin,
  author = {Muhan Li},
  title = {Machin},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/iffiX/machin}},
}

Roadmap


Please see Roadmap for the exciting new features we are currently working on!
