Overview

AnimalAI 3

AAI supports interdisciplinary research to help better understand human, animal, and artificial cognition. It aims to support AI research towards unlocking cognitive capabilities and better understanding the space of possible minds. It is designed to facilitate testing across animals, humans, and AI.

This Repo

This repo contains the AnimalAI environment, introductory Python scripts for interacting with it, and the 900 tasks used in the original Animal-AI Olympics competition (plus some others for demonstration purposes). Details of the tasks can be found on the AAI website, where they can also be played and competition entries watched.

The environment is built using Unity ML-Agents release 2.1.0-exp.1 (Python package version 0.27.0).
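Since the package versions matter, you can check what is installed using only the standard library. The distribution names mlagents and animalai below are assumptions based on the packages this repo installs:

```python
# Print installed package versions (Python 3.8+ standard library only).
# "mlagents" and "animalai" are assumed distribution names; adjust if needed.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("mlagents", "animalai"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```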

The AnimalAI environment and packages are currently only tested on Linux (Ubuntu 20.04.2 LTS) with Python 3.8, but have been reported to work with Python 3.6+, other Linux distributions, Windows, and Mac.

The Unity Project for the environment is available here.

Installing

To get started you will need to:

  1. Clone this repo.
  2. Install the animalai python package and requirements by running pip install -e animalai from the root folder.
  3. Download the environment for your system:
OS        Environment link
Linux     v3.0
Mac       v3.0
Windows   v3.0

(Old v2.x versions can be found here)

Unzip the entire content of the archive into the (initially empty) env folder. On Linux you may have to make the file executable by running chmod +x env/AnimalAI.x86_64. Note that the env folder should contain the AnimalAI executable (AnimalAI.exe, AnimalAI.x86_64, or AnimalAI.app depending on your system) together with any other folders shipped in the zip file.
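Once the binary is unzipped you can sanity-check the installation from Python. The following is a minimal sketch, assuming the animalai package exposes an AnimalAIEnvironment class (as the example scripts use) with the standard ml-agents constructor arguments; adjust the path and port for your setup:

```python
# Minimal smoke test: launch the environment and list the registered behaviors.
# Assumes the executable sits in `env/` as described above; `file_name` can
# omit the platform-specific extension, which ml-agents resolves itself.
from animalai.envs.environment import AnimalAIEnvironment

env = AnimalAIEnvironment(
    file_name="env/AnimalAI",  # path to the unzipped executable
    base_port=5005,            # change if the port is already in use
)
env.reset()
print("Detected behaviors:", list(env.behavior_specs.keys()))
env.close()
```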

Tutorials and Examples

Some example scripts to get started can be found in the examples folder. The following docs provide information for some common uses of the environment.
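For orientation, here is a hedged sketch of the kind of low-level interaction loop those examples build on, using the ml-agents release 2 (Python 0.27) API; the constructor arguments and paths are assumptions, and the scripts in examples remain the authoritative reference:

```python
# Random agent via the low-level ml-agents API.
from animalai.envs.environment import AnimalAIEnvironment

env = AnimalAIEnvironment(file_name="env/AnimalAI", base_port=5005)
env.reset()
behavior = list(env.behavior_specs.keys())[0]
action_spec = env.behavior_specs[behavior].action_spec

for _ in range(3):                # a few short episodes
    env.reset()
    for _ in range(200):          # fixed budget of steps per episode
        decision_steps, terminal_steps = env.get_steps(behavior)
        if len(decision_steps) > 0:
            # Sample a random action for every agent awaiting a decision.
            env.set_actions(behavior, action_spec.random_action(len(decision_steps)))
        env.step()
env.close()
```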

Manual Control

If you launch the environment directly from the executable or through the play.py script, it will launch in player mode, where you can control the agent with the following keys:

Keyboard Key   Action
W              move agent forwards
S              move agent backwards
A              turn agent left
D              turn agent right
C              switch camera
R              reset environment
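Player mode can also be started from Python; examples/play.py does exactly this. A condensed sketch, assuming the AnimalAIEnvironment constructor accepts a play flag (check play.py for the exact call):

```python
# Launch the environment in player (manual-control) mode from Python.
from animalai.envs.environment import AnimalAIEnvironment

env = AnimalAIEnvironment(
    file_name="env/AnimalAI",
    play=True,  # assumed flag: hands control to the keyboard bindings above
)
try:
    input("Press Enter to close the environment... ")
finally:
    env.close()
```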

Citing

If you use the Animal-AI environment in your work you can cite the environment paper:

Crosby, M., Beyret, B., Shanahan, M., Hernández-Orallo, J., Cheke, L. & Halina, M. (2020). The Animal-AI Testbed and Competition. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, in Proceedings of Machine Learning Research 123:164-176. Available here.

@InProceedings{pmlr-v123-crosby20a,
  title     = {The Animal-AI Testbed and Competition},
  author    = {Crosby, Matthew and Beyret, Benjamin and Shanahan, Murray and Hern\'{a}ndez-Orallo, Jos\'{e} and Cheke, Lucy and Halina, Marta},
  booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track},
  pages     = {164--176},
  year      = {2020},
  editor    = {Hugo Jair Escalante and Raia Hadsell},
  volume    = {123},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--14 Dec},
  publisher = {PMLR},
}

Unity ML-Agents

The Animal-AI Olympics was built using Unity's ML-Agents Toolkit.

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627

Further, the ml-agents documentation should be consulted if you want to make any changes.

Version History

  • v3.0 Note that, due to the changes to controls and graphics, agents trained on previous versions might not perform the same.
    • Updated agent handling. The agent now comes to a stop more quickly when not moving forwards or backwards and accelerates slightly faster.
    • Added new objects, spawners, signs, goal types (see doc)
    • Added 3 animal skins to the player character.
    • Updated graphics for many objects. Default shading on many previously plain objects makes it easier to determine their location and velocity.
    • Many improvements to documentation and examples.
    • Upgraded to ML-Agents 2.1.0-exp.1 (ml-agents Python version 0.27.0)
    • Fixed various bugs.
  • v2.2.3
    • Now you can specify multiple different arenas in a single yml config file and the environment will cycle through them each time it resets
  • v2.2.2
    • Low-quality version with improved fps (further improvements to graphics and fps will follow)
  • v2.2.1
    • Improved UI scaling with respect to screen size
    • Fixed an issue with cardbox objects spawning at the wrong sizes
    • Fixed an issue where the environment would time out after the time period even when health > 0 (no longer the intended behaviour)
    • Improved the Death Zone shader for unusual zone sizes
  • v2.2.0 Health and Basic Scripts
    • Switched to health-based system (rewards remain the same).
    • Updated overlay in play mode.
    • Allow 3D hot zones and death zones and make them 3D by default in old configs.
    • Added rewards that grow/decay (currently not configurable, but this will be added in the next update).
    • Added basic Gym Wrapper (a related sketch follows this version history).
    • Added basic heuristic agent for benchmarking and testing.
    • Improved all other python scripts.
    • Fixed an environment-reset bug that occurred when resetting during training.
    • Added the ability to set the DecisionPeriod (frameskip) when instantiating an environment.
  • v2.1.1 bugfix
    • Fixed raycast length being less than the diagonal length of the standard arena
  • v2.1 beta release
    • Upgraded to ML-Agents release 2 (0.26.0)
    • New features
      • Added raycast observations
      • Added agent global position to observations
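
The basic Gym wrapper added in v2.2.0 lives in the animalai package; as an illustration, something similar can be assembled from the generic gym_unity wrapper that ships with ml-agents. The sketch below uses assumed arguments and is not the repo's own wrapper:

```python
# Wrap AnimalAI as a Gym environment using the generic ml-agents wrapper.
from animalai.envs.environment import AnimalAIEnvironment
from gym_unity.envs import UnityToGymWrapper

unity_env = AnimalAIEnvironment(file_name="env/AnimalAI", base_port=5005)
env = UnityToGymWrapper(unity_env, uint8_visual=True, flatten_branched=True)

obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())  # random policy
    if done:
        obs = env.reset()
env.close()
```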