
Overview


PUREPLES - Pure Python Library for ES-HyperNEAT

About

This is a library of evolutionary algorithms with a focus on neuroevolution, implemented in pure Python and built on the neat-python package. It contains faithful implementations of both HyperNEAT and ES-HyperNEAT, which are briefly described below.

NEAT (NeuroEvolution of Augmenting Topologies) is a method developed by Kenneth O. Stanley for evolving arbitrary neural networks.
HyperNEAT (Hypercube-based NEAT) is a method developed by Kenneth O. Stanley utilizing NEAT. It is a technique for evolving large-scale neural networks using the geometric regularities of the task domain.
ES-HyperNEAT (Evolvable-substrate HyperNEAT) is a method developed by Sebastian Risi and Kenneth O. Stanley utilizing HyperNEAT. It is a technique for evolving large-scale neural networks using the geometric regularities of the task domain. In contrast to HyperNEAT, the substrate used during evolution is able to evolve. This rids the user of some initial work and often creates a more suitable substrate.

The library is designed to make it easy to move between experimental domains.

Getting started

This section briefly describes how to install and run experiments.

Installation Guide

First, make sure you have the dependencies installed: numpy, neat-python, graphviz, matplotlib and gym.
All the above can be installed using pip.
Next, download the source code and run setup.py (pip install .) from the root folder. Now you're able to use PUREPLES!

Experimenting

How to experiment using NEAT will not be described, since this is the responsibility of the neat-python library.

Setting up an experiment for HyperNEAT:

  1. Define a substrate with input nodes and output nodes as lists of tuples. The hidden nodes are a list of lists of tuples, where each inner list represents a layer; the first list is the topmost layer and the last the bottommost.
  2. Create a configuration file defining the various NEAT-specific parameters used for the CPPN.
  3. Define a fitness function that sets the fitness of each genome. This is where the CPPN and the ANN are constructed each generation - use the create_phenotype_network method from the hyperneat module.
  4. Create a population with the configuration file made in (2).
  5. Run the population with the fitness function made in (3) and the configuration file made in (2). The output is the genome solving the task, or the one that came closest to solving it. A minimal sketch of the whole setup follows this list.
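
A minimal sketch of such a setup, loosely based on the included XOR example (the module paths, the Substrate constructor and the config file name are assumptions - check them against your checkout):

    import neat
    from pureples.shared.substrate import Substrate
    from pureples.hyperneat.hyperneat import create_phenotype_network

    # (1) Substrate: coordinates are (x, y) tuples; hidden layers are a list of lists.
    input_coordinates = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
    hidden_coordinates = [[(-0.5, 0.0), (0.5, 0.0)]]  # one hidden layer
    output_coordinates = [(0.0, 1.0)]
    substrate = Substrate(input_coordinates, output_coordinates, hidden_coordinates)

    # (2) NEAT configuration for the CPPN (hypothetical file name).
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config_cppn_example")

    # (3) Fitness function: build the CPPN and the substrate ANN for each genome.
    def eval_fitness(genomes, config):
        for _, genome in genomes:
            cppn = neat.nn.FeedForwardNetwork.create(genome, config)
            net = create_phenotype_network(cppn, substrate)
            genome.fitness = evaluate(net)  # task-specific evaluation, defined elsewhere

    # (4) + (5) Create and run the population.
    pop = neat.Population(config)
    winner = pop.run(eval_fitness, 300)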

Setting up an experiment for ES-HyperNEAT: Use the same setup as HyperNEAT except for:

  • Not declaring hidden nodes when defining the substrate.
  • Declaring ES-HyperNEAT-specific parameters (a sketch follows this list).
  • Using the create_phenotype_network method residing in the es_hyperneat module when creating the ANN.
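
A sketch of those differences, reusing names from the HyperNEAT sketch above (the parameter names follow the included ES-HyperNEAT examples, but treat them as assumptions and check them against your version):

    from pureples.shared.substrate import Substrate
    from pureples.es_hyperneat.es_hyperneat import ESNetwork

    # No hidden nodes are declared; ES-HyperNEAT places them itself.
    substrate = Substrate(input_coordinates, output_coordinates)

    # ES-HyperNEAT-specific parameters (values are purely illustrative).
    params = {"initial_depth": 1,
              "max_depth": 2,
              "variance_threshold": 0.03,
              "band_threshold": 0.3,
              "iteration_level": 1,
              "division_threshold": 0.5,
              "max_weight": 5.0,
              "activation": "sigmoid"}

    # Inside the fitness function, the ANN now comes from the es_hyperneat module.
    network = ESNetwork(substrate, cppn, params)
    net = network.create_phenotype_network()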

If the experiment is defined by an OpenAI Gym environment, experimenting is even easier. The shared module contains a file called gym_runner that does most of the work. Given the number of generations, the environment to run, a configuration file, and a substrate, the relevant runner takes care of everything regarding the population, fitness function, etc.
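
For example, a run with the ES-HyperNEAT runner might look like the following sketch (the run_es signature mirrors the examples and the traceback further down this page, but treat it as an assumption):

    import gym
    from pureples.shared.gym_runner import run_es  # run_hyper is the HyperNEAT counterpart

    # Generations, environment, max steps per episode, CPPN config,
    # ES parameters, substrate and an optional number of trials.
    env = gym.make("CartPole-v1")
    winner, stats = run_es(100, env, 200, config, params, substrate, max_trials=100)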

Please refer to the sample experiments included for further details on experimenting.

Comments
  • The query_cppn function returns values over a discontinuous range


    Hi,

    I have a small suggested improvement for the query_cppn function in hyperneat.py. In lines 85-88, values below the threshold are replaced with 0.0, so the range [-0.2, 0.2] drops out of the output in this implementation.

    However, the original paper (http://axon.cs.byu.edu/Dan/778/papers/NeuroEvolution/stanley3**.pdf) says "The magnitude of weights above this threshold are scaled to be between zero and a maximum magnitude in the substrate." on page 8.

    Thus, I suggest changing the query_cppn function so that it returns values over the continuous range [-max_val, max_val].
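
    For illustration, a scaling rule along the lines of the paper could look like this (a minimal sketch, not the library's code; the 0.2 threshold and max_weight value are illustrative):

        def scale_weight(w, threshold=0.2, max_weight=5.0):
            # Prune magnitudes at or below the threshold, and map the remaining range
            # (threshold, 1.0] continuously onto (0, max_weight], preserving the sign.
            if abs(w) <= threshold:
                return 0.0
            sign = 1.0 if w > 0 else -1.0
            return sign * (abs(w) - threshold) / (1.0 - threshold) * max_weight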

    opened by yamatakeru 14
  • Config always finds 5 inputs. [RuntimeError: Expected 840 inputs, got 5]


     ****** Running generation 0 ******
    
    Traceback (most recent call last):
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\ptvsd_launcher.py", line 48, in <module>
        main(ptvsdArgs)
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 432, in main
        run()
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 316, in run_file
        runpy.run_path(target, run_name='__main__')
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "g:\Emulators\ML AI open AI\env2.py", line 51, in <module>
        winner = run(200, env)[0]
      File "g:\Emulators\ML AI open AI\env2.py", line 37, in run
        winner, stats = run_es(gens, env, 200, config, params, sub, max_trials=200)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\shared\gym_runner.py", line 50, in run_es
        pop.run(eval_fitness, gens)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\neat\population.py", line 89, in run
        fitness_function(list(iteritems(self.population)), self.config)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\shared\gym_runner.py", line 25, in eval_fitness
        net = network.create_phenotype_network()
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 46, in create_phenotype_network
        hidden_nodes, connections = self.es_hyperneat()
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 151, in es_hyperneat
        root = self.division_initialization((x, y), True)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 110, in division_initialization
        c.w = query_cppn(coord, (c.x, c.y), outgoing, self.cppn, self.max_weight)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\hyperneat\hyperneat.py", line 84, in query_cppn
        w = cppn.activate(i)[0]
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\neat\nn\feed_forward.py", line 14, in activate
        raise RuntimeError("Expected {0:n} inputs, got {1:n}".format(len(self.input_nodes), len(inputs)))
    RuntimeError: Expected 840 inputs, got 5
    

    I ran this through the debugger and found that at some point random float values replace the number of inputs that is initially set.

    I could even see that at some point during execution the correct number of inputs was actually used.

    I've been fighting to find the cause and I've come to the conclusion that something has to be wrong in the module.

    For some context: I took one of the examples and attempted to configure it to run a Gym Retro environment.

    As you can see, the only thing stopping me is the input count getting mangled somehow.

    If you need more information please let me know.

    opened by SilverDash 12
  • Question about discrete gym runner observation space


    Hi!

    Very cool project, thanks for making it available. I have a toy project I am working on with Gym for function approximation. It has a discrete-valued observation space consisting of 12 integers; the action space is also discrete-valued, three integers used to determine the correct agent action based on the sequence of 12 integers.

    So does pureples support discrete observation and action spaces, and would the cartpole experiment make for a good starting point for this?

    Thanks in advance!

    opened by pablogranolabar 5
  • Line 169 in es_hyperneat.py is different from the algorithm in the original paper


    Hi,

    The following part seems to be different from the algorithm in https://eplex.cs.ucf.edu/papers/risi_alife12.pdf.

    160 | for i in range(self.iteration_level):  # Explore from hidden.
    161 |     for x, y in unexplored_hidden_nodes:
    162 |         root = self.division_initialization((x, y), True)
    163 |         self.pruning_extraction((x, y), root, True)
    164 |         connections2 = connections2.union(self.connections)
    165 |         for c in connections2:
    166 |             hidden_nodes.add((c.x2, c.y2))
    167 |         self.connections = set()
    168 | 
    169 | unexplored_hidden_nodes -= hidden_nodes
    

    According to the pseudocode on page 47, line 169 should be indented one more level. Also, unexplored_hidden_nodes will always end up as the empty set if we remove hidden_nodes from it (because hidden_nodes is always a superset of unexplored_hidden_nodes). I think it needs to be corrected as follows.

    160 | for i in range(self.iteration_level):  # Explore from hidden.
    161 |     for x, y in unexplored_hidden_nodes:
    162 |         root = self.division_initialization((x, y), True)
    163 |         self.pruning_extraction((x, y), root, True)
    164 |         connections2 = connections2.union(self.connections)
    165 |         for c in connections2:
    166 |             hidden_nodes.add((c.x2, c.y2))
    167 |         self.connections = set()
    168 | 
    169 - unexplored_hidden_nodes -= hidden_nodes
        +     unexplored_hidden_nodes = hidden_nodes - unexplored_hidden_nodes
    
    opened by yamatakeru 3
  • ES-HyperNEAT for OpenAI-Gyms SpaceInvader


    Hey,

    First of all, you did great work - easy to use and understand! What I am trying to do is use ES-HyperNEAT to exploit the geometric information in the pixels of an Atari game. OpenAI Gym gives an observation space of (210, 160, 3); I have downsized it to (84, 84, 1) without colours. That is 7056 input nodes instead of 100800.

    The problem is that the outputs of the substrate's output nodes are always zero.

    The Input Layout is:

    for y in range(1,85):
    	for x in range(1,85):
    		input_coordinates.append((x , y))
    

    Is there some configuration in the CPPN I should watch out for, is the substrate too large, or is there a maximum range for node placement in the substrate (e.g. just between -1 and 1)?
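
    From the papers, substrate coordinates seem to be conventionally kept within [-1, 1]; would a layout like the following (purely illustrative) be the right approach?

        # Map the 84x84 pixel grid onto substrate coordinates in [-1.0, 1.0].
        input_coordinates = []
        for y in range(84):
            for x in range(84):
                input_coordinates.append((-1.0 + 2.0 * x / 83.0, -1.0 + 2.0 * y / 83.0))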

    Thanks in advance!

    opened by Multiv4c 3
  • Question about inference with evolved ANN


    Hi @ukuleleplayer,

    I've been working on a PUREPLES-based project with your gym runner, but I can't find any resources on inference with an evolved ANN. It looks like the phenotype gets pickled and the model saved whenever the reward is +1, but what format is that model in, and how do I deploy it for inference tasks?

    What I want to do is implement an additional loop whenever a +1 reward is found, to test it n more times and see whether it has generalized to other examples.

    And does it make sense to restart an episode on each of those saved pickles for subsequent runs?
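
    Right now I am assuming the saved artifact is a pickled neat-python network and that inference is just unpickling it and calling activate, roughly along these lines (file name hypothetical) - is that right?

        import pickle

        with open("winner_network.pkl", "rb") as f:  # hypothetical file name
            net = pickle.load(f)

        observation = env.reset()
        done = False
        while not done:
            outputs = net.activate(observation)  # evolved ANN forward pass
            observation, reward, done, info = env.step(outputs)  # may need mapping to the action space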

    TIA!

    opened by pablogranolabar 2
  • Connection's __eq__ does not return a boolean in es_hyperneat.py.


    Hi.

    Connection's __eq__ is expected to return a boolean, but it actually returns a tuple (float, float, float, bool, float, float, float). However, the library seems to work correctly at first glance.
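
    For illustration, a fix along these lines would return a proper bool (the field names are assumed from how connections are used elsewhere in es_hyperneat.py):

        def __eq__(self, other):
            # Compare the endpoints explicitly so a bool is returned,
            # instead of falling back to tuple construction.
            return (self.x1, self.y1, self.x2, self.y2) == (other.x1, other.y1, other.x2, other.y2)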

    Tentatively, I will create a PR.

    opened by yamatakeru 2
  • Missing list() in es_hyperneat.py / unsupported operand type(s) for +: 'range' and 'range'


    Hi, I think that in es_hyperneat.py, on lines 30/31, the ranges for the input and output nodes should be converted to lists with list().

    Otherwise, return neat.nn.RecurrentNetwork(input_nodes, output_nodes, node_evals) throws an error: unsupported operand type(s) for +: 'range' and 'range'.

    Without that change, scripts like es_hyperneat_xor_large.py do not work.

    The same problem seems to appear in hyperneat.py.
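
    For reference, the underlying Python 3 behaviour (a standalone illustration, not the library's code):

        # range objects cannot be concatenated with `+` in Python 3:
        #   range(3) + range(3, 5)  ->  TypeError: unsupported operand type(s) for +: 'range' and 'range'
        # Converting them to lists first, as suggested above, works:
        input_nodes = list(range(3))
        output_nodes = list(range(3, 5))
        print(input_nodes + output_nodes)  # [0, 1, 2, 3, 4]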

    opened by DaKnick 2
  • The relationship between ESNetwork.activations and max_depth


    Could anyone please explain the following line of code in es_hyperneat.py?

            # Number of layers in the network.
            self.activations = 2 ** params["max_depth"] + 1
    

    Thank you very much.
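
    My current understanding (which may be wrong) is that neat-python's recurrent network advances signals by one connection per activate() call, so the network is activated repeatedly until an input can have propagated through the deepest possible path, roughly like this (variable names assumed):

        net.reset()
        for _ in range(network.activations):
            outputs = net.activate(observation)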

    opened by lester1027 1
  • network.create_phenotype_network() executing for more than 30 minutes when input and output sizes are (49360,) and (1024,) respectively


    I have been trying to use ES-HyperNEAT on a custom environment. The input to the ES network has size (49360,) and the output (1024,). The net = network.create_phenotype_network() call sometimes takes more than 30 minutes to execute for a single genome. Does that mean that the larger the network's input and output sizes are, the longer it takes to create the network?

    Is there any solution for this?

    opened by Abdul-Wahab-mc 1
  • Multiple activation function support for ES-HyperNEAT?


    Hi @ukuleleplayer

    I've noticed that all of the examples use sigmoid activation functions for ES-HyperNEAT; is the use of multiple activation functions at the per-neuron level possible with PUREPLES?

    Or any activation function other than sigmoid for ES-HyperNEAT?

    TIA

    opened by pablogranolabar 1
  • Question about run_hyper()


    Hi, first of all thank you for your library, it's great! I am going through the code trying to understand what each step does for the pole-balancing environment. One point really leaves me confused: in run_hyper(), it seems we create the population and test it for one trial, then again for 10 trials, and then for max_trials trials. Is there any reason to do that? Thanks!

    opened by ValerioB88 0
Releases(v0.0-alpha)

Owner: Adrian Westh (Data Conscious Software Developer)