Creating Artificial Life with Reinforcement Learning

Overview

Code and instructions for creating Artificial Life in a non-traditional way, namely with Reinforcement Learning instead of Evolutionary Algorithms.

Although Evolutionary Algorithms have been shown to produce interesting behavior, they focus on learning across generations, whereas behavior could also be learned during an entity's lifetime. This is where Reinforcement Learning comes in: it learns through a reward/punishment system, which allows an entity to acquire new behavior during its lifetime. Using Reinforcement Learning, entities learn to survive, reproduce, and maximize the fitness of their kin.

Table of Contents

  1. About the Project
  2. Getting Started
    2.1. Prerequisites
    2.2. Usage
    2.3. Google Colaboratory
  3. Environment
    3.1. Agents
    3.2. Observation
    3.3. Reward
    3.4. Algorithms
  4. Results
  5. Documentation
    5.1. Training
    5.2. Testing

1. About the Project

Back to ToC
The simulation above is a good summary of what this project is about. Entities move and learn independently, eat, attack other entities, and reproduce. This is all made possible by applying Reinforcement Learning algorithms, such as DQN and PPO, to each entity.

The general principle is simple: each entity starts by executing random actions and slowly learns, based on specific rewards, whether those actions helped. The entity is punished if an action was poor and rewarded if it was helpful.

It is then up to the entities to find a way to survive as long as possible while also keeping their kin in as good a shape as possible.
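
As a toy illustration of that reward-and-punishment idea (this is not the ReinLife API, just a bandit-style sketch), an entity that tries random actions and nudges a value estimate per action will eventually favor the helpful one:

import random

# Toy sketch, not the ReinLife API: an entity tries actions at random,
# receives a reward or punishment, and keeps a running value estimate per
# action that slowly favours the helpful one.
action_values = {"eat": 0.0, "move": 0.0, "attack": 0.0}
learning_rate = 0.1

for step in range(1000):
    action = random.choice(list(action_values))
    reward = 1.0 if action == "eat" else -0.1    # toy reward signal
    action_values[action] += learning_rate * (reward - action_values[action])

print(max(action_values, key=action_values.get))    # prints "eat"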


2. Getting Started

Back to ToC

To get started, you will only need to install the requirements and fork/download the ReinLife package, together with the train.py and test.py files.

2.1. Prerequisites

To install the requirements, simply run the following:
pip install -r requirements.txt

2.2. Usage

Due to the many parameters within each model and the environment itself, it is advised to start with train.py and test.py. These files have been prepared such that you can run them as is.

Training

To train one or more models, simply run:

from ReinLife.Models import PERD3QN
from ReinLife.Helpers import trainer

brains = [PERD3QN(), 
          PERD3QN()]

trainer(brains, n_episodes=15_000, update_interval=300, width=30, height=30, max_agents=100,
        visualize_results=True, print_results=False, static_families=False, training=True, save=True)

This will start training the models for 15_000 episodes. The most important variable here is static_families. If it is set to True, there will be at most as many genes as the number of brains chosen, so you will only see two colors. If you set it to False, any number of genes can be created, each with its own brain.

Testing

To test one or more models, simply run:

from ReinLife import tester
from ReinLife.Models import DQN, D3QN, PERD3QN, PPO, PERDQN

main_brains = [PPO(load_model="pretrained/PPO/PPO/brain_gene_0.pt"),
               DQN(load_model="pretrained/DQN/DQN/brain_gene_0.pt", training=False),
               D3QN(load_model="pretrained/D3QN/D3QN/brain_gene_0.pt", training=False),
               PERD3QN(load_model="pretrained/PERD3QN/Static Families/PERD3QN/brain_gene_1.pt", training=False),
               PERDQN(load_model="pretrained/PERDQN/PERDQN/brain_gene_1.pt", training=False)]
tester(main_brains, width=30, height=30, max_agents=100, static_families=True, fps=10)

The models above are pre-trained (see results below).
You can choose any number of brains that you have trained previously. Note: make sure to set all models (except PPO) to training=False, otherwise they will show more random behavior.

2.3. Google Colaboratory

It is possible to run the training code in Google Colaboratory if you need more computing power. You start by installing pygame and cloning the repo:

!pip install pygame
!git clone https://github.com/MaartenGr/ReinLife.git
%cd ReinLife

After that, you are ready to run the training code:

from ReinLife.Models import PERD3QN
from ReinLife.Helpers import trainer

n_episodes = 15_000

brains = [PERD3QN(train_freq=10), PERD3QN(train_freq=10)]

env = trainer(brains, n_episodes=n_episodes, update_interval=300, width=30, height=30, max_agents=100,
        visualize_results=True, print_results=False, google_colab=True, render=False, static_families=True,
        training=True, save=True)

Then, simply look at the files on the left in ReinLife/experiments/... to find the experiment that was run.


3. Environment

Back to TOC

The environment is built upon a numpy matrix of size n * m, where each cell is rendered as 24 by 24 pixels. Each location within the matrix can be occupied by at most a single entity.
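
As a rough sketch of that idea (not the actual ReinLife implementation), the grid can be thought of as an integer matrix where zero marks an empty cell:

import numpy as np

# Minimal sketch of the grid idea: 0 means "empty" and any other integer is
# the id of the single entity occupying that cell.
width, height = 30, 30
grid = np.zeros((height, width), dtype=int)

def place(grid, entity_id, x, y):
    """Place an entity only if the target cell is still free."""
    if grid[y, x] == 0:
        grid[y, x] = entity_id
        return True
    return False

place(grid, entity_id=1, x=0, y=5)    # succeeds
place(grid, entity_id=2, x=0, y=5)    # fails: the cell is already taken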

3.1. Agents

Agents are entities or organisms in the simulation that can move, attack, reproduce, and act independently.

Each agent has the following characteristics:

  • Health
    • Starts at 200 and decreases by 10 each step
    • Health cannot exceed 200
  • Age
    • Starts at 0 and increases by 1 each step
    • The maximum age is 50, after which the agent dies
  • Gene
    • Each agent is given a gene, which is simply an integer
    • All of its offspring share the same gene value
    • Any new agent that is created through means other than reproduction gets a new gene value
    • The gene is represented by the color of the agent's body

An agent can perform one of the following eight actions:

  • Move one space left, right, up, or down
  • Attack in the left, right, up, or down direction

The order of action execution is as follows:

  • Attack -> Move -> Eat -> Reproduce
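
The following sketch shows one hypothetical way to encode these actions and the fixed phase order; the actual indices used by ReinLife may differ:

from enum import IntEnum

# Hypothetical encoding of the eight actions (illustrative only).
class Action(IntEnum):
    MOVE_LEFT = 0
    MOVE_RIGHT = 1
    MOVE_UP = 2
    MOVE_DOWN = 3
    ATTACK_LEFT = 4
    ATTACK_RIGHT = 5
    ATTACK_UP = 6
    ATTACK_DOWN = 7

def is_attack(action: Action) -> bool:
    return action >= Action.ATTACK_LEFT

# The four phases are resolved for all agents in this fixed order each step.
PHASES = ["attack", "move", "eat", "reproduce"]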

Movement

An agent can occupy any unoccupied space and, from that position, can move up, down, left, or right. Entities cannot move diagonally. The environment has no walls: if an entity moves left from the leftmost position in the matrix, it appears at the rightmost position. In other words, the world wraps around at its edges.

Although movement in itself is not complex, it becomes more involved when multiple entities want to move into the same spot. For that reason, each entity checks whether the target coordinate is unoccupied and whether no other entity wants to move into that space. This check is done iteratively, since target coordinates change whenever an entity cannot move.
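
A minimal sketch of this movement logic, under the assumption of a single pass over the agents (the real code resolves conflicts iteratively):

# Coordinates wrap around because the world has no walls, and a move only
# happens when the target cell is free and no other entity has already
# claimed it this step.
width, height = 30, 30
occupied = {(3, 5)}     # cells currently taken
claimed = set()         # cells already claimed during this step

def wrap(x, y):
    return x % width, y % height

def try_move(pos, dx, dy):
    target = wrap(pos[0] + dx, pos[1] + dy)
    if target in occupied or target in claimed:
        return pos      # blocked: stay in place this step
    claimed.add(target)
    return target

print(try_move((0, 5), -1, 0))    # wraps around to (29, 5)
print(try_move((4, 5), -1, 0))    # blocked by the entity at (3, 5)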

Attacking

An agent can attack in one of four directions:

  • Up, Down, Left, or Right

An agent stands still when it attacks. However, since attacking is executed before any other action, the target cannot move away. When an agent successfully attacks another agent, the target dies and the attacker's health increases. Moreover, the attacker's border turns red after a successful attack.
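
The sketch below illustrates how a successful attack might be resolved; the health bonus is an assumed value, since the exact amount is not stated here.

# Illustrative resolution of a successful attack; HEALTH_GAIN_ON_KILL is an
# assumption, not a number taken from the project.
HEALTH_GAIN_ON_KILL = 100
MAX_HEALTH = 200

def resolve_attack(attacker, victim):
    victim["alive"] = False                      # the target dies
    attacker["health"] = min(attacker["health"] + HEALTH_GAIN_ON_KILL, MAX_HEALTH)
    attacker["border"] = "red"                   # visual cue for a kill

attacker = {"health": 120, "alive": True, "border": "white"}
victim = {"health": 80, "alive": True, "border": "white"}
resolve_attack(attacker, victim)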

(Re)production

Each agent learns continuously during its lifetime. The end of an episode is marked by the end of an agent's life.

When a new entity is reproduced, it inherits its brain (RL algorithm) from its parents.

When a new entity is produced, that is, spawned without parents, it inherits its brain (RL algorithm) from one of the best agents seen so far. A list of the 10 best agents is tracked during the simulation.
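
A minimal sketch of how such a shortlist could be maintained (the data structure below is an assumption for illustration, not the project's actual bookkeeping):

import heapq

# Keep the ten fittest agents seen so far, from which a newly *produced*
# entity could copy its brain.
best_agents = []    # min-heap of (fitness, agent_id) pairs

def register(fitness, agent_id, k=10):
    heapq.heappush(best_agents, (fitness, agent_id))
    if len(best_agents) > k:
        heapq.heappop(best_agents)    # drop the weakest tracked agent

for i, fitness in enumerate([0.3, 0.9, 0.1, 0.7]):
    register(fitness, agent_id=i)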

3.2. Observation

The field of view of each agent is a square surrounding the agent. Since the world wraps around, an agent near an edge can see "through" it to the opposite side.

The input for the neural network can be seen in the image below:

[Image: the three observation grids (health, kinship, nutrition) that form the network input]

There are three grids of 7x7 (example shows 5x5) that each show a specific observation of the environment:

  • Health
    • Shows the health of all agents within the agent's fov
  • Kinship
    • Shows whether agents within the agent's fov are related to the agent
  • Nutrition
    • Shows the nutritional value of food items within the agent's fov

Thus, there are 3 * (7 * 7) + 6 = 153 input values.
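
A rough sketch of how these inputs could be stacked into a single vector; the six extra scalars are an assumption (for example the agent's own health and age), since their exact contents are not listed here:

import numpy as np

# The three 7x7 grids follow the description above; the six extra scalars
# are assumed, not the documented set.
fov = 7
health_grid = np.zeros((fov, fov))
kinship_grid = np.zeros((fov, fov))
nutrition_grid = np.zeros((fov, fov))
extra_features = np.zeros(6)

observation = np.concatenate([health_grid.ravel(),
                              kinship_grid.ravel(),
                              nutrition_grid.ravel(),
                              extra_features])
assert observation.shape == (153,)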

3.3. Reward

The reward structure is tricky, as you want to minimize how much you steer the entity towards certain behavior. For that reason, I have adopted a simple and straightforward fitness measure, namely:

$$r_i^t = \frac{1}{n}\sum_{j=1}^{n}\delta_{g_i g_j}$$

Here, r_i^t is the reward given to agent i at time t. δ is the Kronecker delta, which is one if the gene of agent i, g_i, equals the gene of agent j, g_j, and zero otherwise, and n is the total number of agents alive at time t. Thus, the reward essentially counts how many living agents share a gene with agent i at time t and divides that by the total number of agents alive.

The result is that an agent's behavior is only steered towards making sure its gene lives on for as long as possible.
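
The same measure written out as a minimal Python sketch:

# The fraction of all living agents that carry the same gene as agent i.
def reward(agent_gene, living_genes):
    """living_genes holds the gene of every agent alive at time t."""
    n = len(living_genes)
    return sum(gene == agent_gene for gene in living_genes) / n

print(reward(agent_gene=1, living_genes=[1, 1, 2, 3]))    # 0.5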

3.4. Algorithms

Currently, the following algorithms are implemented that can be used as brains:

  • Deep Q Network (DQN)
  • Prioritized Experience Replay Deep Q Network (PER-DQN)
  • Double Dueling Deep Q Network (D3QN)
  • Prioritized Experience Replay Double Dueling Deep Q Network (PER-D3QN)
  • Proximal Policy Optimization (PPO)

4. Results

Back to TOC

In order to test the quality of the trained algorithms, I ran each algorithm independently against a copy of itself to measure how quickly it converges to a high fitness. Below, you can see all algorithms compared, with PER-D3QN coming out on top. Note that this does not necessarily mean it is the best algorithm; it may simply have converged faster than the others, which limits how much they could learn in the same number of episodes.

[Plot: fitness of each algorithm over the course of training]

Moreover, for each algorithm, I ran simulations with and without static families.

DQN

With static families

PER-DQN

With static families

D3QN

With static families

PER-D3QN

With static families
Without static families

PPO

With static families

5. Documentation

Back to TOC

5.1. Training

The parameters for train.py:

  • brains: A list of brains, defined as agents in the ReinLife.Models folder.
  • n_episodes: The number of episodes to run the training sequence. Default: 10_000
  • width, height: The width and height of the environment. Default: 30, 30
  • visualize_results: Whether to visualize the results interactively in matplotlib. Default: False
  • google_colab: If you want to visualize your results interactively in Google Colab, set this parameter to True as well as the one above. Default: False
  • update_interval: The interval at which the results are averaged. Default: 500
  • print_results: Whether to print the results to the console. Default: True
  • max_agents: The maximum number of agents that can occupy the environment. Default: 100
  • render: Whether to render the environment in pygame while training. Default: False
  • static_families: Whether to use a fixed number of families. Each family has its own brain, defined by the models in the brains variable. Default: False
  • training: Whether to train using the settings above or simply show the result. Default: True
  • limit_reproduction: If False, agents can reproduce indefinitely. If True, each agent can reproduce only once. Default: False
  • incentivize_killing: Whether to incentivize killing by adding 0.2 to the reward every time an agent kills another. Default: True

5.2. Testing

The parameters for test.py:

  • brains: A list of brains, defined as agents in the ReinLife.Models folder.
  • width, height: The width and height of the environment. Default: 30, 30
  • pastel_colors: Whether to automatically generate random pastel colors. Default: False
  • max_agents: The maximum number of agents that can occupy the environment. Default: 100
  • static_families: Whether to use a fixed number of families. Each family has its own brain, defined by the models in the brains variable. Default: False
  • limit_reproduction: If False, agents can reproduce indefinitely. If True, each agent can reproduce only once. Default: False
  • fps: Frames per second. Default: 10

Other work

ReinLife was based on:

  • Abrantes, J. P., Abrantes, A. J., & Oliehoek, F. A. (2020). Mimicking Evolution with Reinforcement Learning. arXiv preprint arXiv:2004.00048.