
Overview

Wilderness Scavenger: 3D Open-World FPS Game AI Challenge

This is a platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI.

Change Log

  • 2022-05-16: improved engine backend (Linux) with better stability (v1.0)
    • Check out Supported Platforms for download links.
    • Make sure to update to the latest version of the engine if you would like to use the depth map or enemy state features.
  • 2022-05-18: updated engine backend for Windows and MacOS (v1.0)

Competition Overview

With a focus on learning intelligent agents in open-world games, this year we are hosting a new contest called Wilderness Scavenger. In this new game, which features Battle Royale-style 3D open-world gameplay and random PCG-based world generation, participants must train agents that can perform subtasks common to FPS games, such as navigation, scouting, and skirmishing. To win the competition, agents must develop strong perception of complex 3D environments and learn to exploit various environmental structures (such as terrain, buildings, and plants) through flexible strategies that gain advantages over other competitors. Despite the difficulty of this goal, we hope that this new competition can serve as a cornerstone of research in game AI for open-world games.

Features

  • A lightweight 3D open-world FPS game developed with the Unity3D game engine
  • Rendering-off game acceleration for fast training and evaluation
  • A large open-world environment offering agents a high degree of behavioral freedom
  • Highly customizable game configuration with random supply distribution and dynamic refresh
  • PCG-based map generation with randomly spawned buildings, plants and obstacles (100 training maps)
  • Interactive replay tool for game record visualization

Basic Structures

We developed this repository to provide a training and evaluation platform for researchers interested in open-world FPS game AI. To get started quickly, a typical workspace structure when using this repository looks like this:

.
├── examples  # providing starter code examples and training baselines
│   ├── envs/...
│   ├── basic.py
│   ├── basic_track1_navigation.py
│   ├── basic_track2_supply_gather.py
│   ├── basic_track3_supply_battle.py
│   ├── baseline_track1_navigation.py
│   ├── baseline_track2_supply_gather.py
│   └── baseline_track3_supply_battle.py
├── inspirai_fps  # the gameplay API source code
│   ├── lib/...
│   ├── __init__.py
│   ├── gamecore.py
│   ├── raycast_manager.py
│   ├── simple_command_pb2.py
│   ├── simple_command_pb2_grpc.py
│   └── utils.py
└── fps_linux  # the engine backend (Linux)
    ├── UnityPlayer.so
    ├── fps.x86_64
    ├── fps_Data/...
    └── logs/...
  • fps_linux (must be manually downloaded and unzipped into your working directory): the Linux engine backend extracted from our game development project, containing all game-related assets, binaries, and source code.
  • inspirai_fps: the Python gameplay API for agent training and testing, providing the core Game class along with other useful tool classes and functions.
  • examples: basic starter code for each game mode targeting each track of the challenge, plus our implementations of baseline solutions built on the ray.rllib reinforcement learning framework.

Supported Platforms

We support multiple platforms with different engine backends, including:

Installation (from source)

To use the gameplay API, first install the inspirai_fps package with the commands below:

git clone https://github.com/inspirai/wilderness-scavenger
cd wilderness-scavenger
pip install .

We recommend installing this package with Python 3.8 (our development environment). You may first create a virtual environment using conda and then complete the installation:

$ conda create -n WildScav python=3.8
$ conda activate WildScav
(WildScav) $ pip install .
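
After installation, you can quickly verify that the gameplay API is importable. A minimal check; Game and ActionVariable are the core classes used throughout the examples:

# Sanity check: confirm the gameplay API is importable
import inspirai_fps
from inspirai_fps import Game, ActionVariable
print("inspirai_fps location:", inspirai_fps.__file__)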

Installation (from PyPI)

Note: the PyPI release may not be kept up to date. We strongly recommend using the installation method above.

Alternatively, you can install the package directly from PyPI. Note that this installs only the gameplay API inspirai_fps, not the backend engine, so you still need to manually download the correct engine backend from the Supported Platforms section.

pip install inspirai-fps
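
After a PyPI install, the gameplay API still needs to be pointed at the manually downloaded engine backend. A minimal sketch, assuming the Linux backend and map data were unzipped to ./fps_linux and ./map_data next to your script (and that the remaining Game constructor arguments take defaults):

from inspirai_fps import Game

# engine_dir must point at the unzipped backend folder (see Supported Platforms)
game = Game(engine_dir="./fps_linux", map_dir="./map_data")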

Loading Engine Backend

To successfully run the game, make sure the game engine backend for your platform is downloaded and set the engine_dir parameter of the Game init function correctly. For example, here is a code snippet from the script examples/basic.py:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--engine-dir", type=str, default="../fps_linux")
...
game = Game(..., engine_dir=args.engine_dir, ...)

Loading Map Data

To access features such as real-time depth map computation and randomized player spawning, you need to download the map data and load it into the Game. Once depth map rendering is turned on, the game server will automatically compute a depth map from the player's first-person perspective at each time step.

  1. Download the map data from Google Drive or Feishu and decompress the downloaded file to your preferred directory (e.g., <WORKDIR>/map_data).
  2. Set the map_dir parameter of the Game initializer accordingly.
  3. Set the map_id as you like.
  4. Turn on depth map computation.
  5. Turn on random start locations to spawn agents at random places.

Read the following code snippet in the script examples/basic.py as an example:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--map-id", type=int, default=1)
parser.add_argument("--use-depth-map", action="store_true")
parser.add_argument("--random-start-location", action="store_true")
parser.add_argument("--map-dir", type=str, default="../map_data")
...
game = Game(map_dir=args.map_dir, ...)
game.set_map_id(args.map_id)  # this will load the valid locations of the specified map
...
if args.use_depth_map:
    game.turn_on_depth_map()
    game.set_depth_map_size(380, 220, 200)  # width (pixels), height (pixels), depth_limit (meters)
...
if args.random_start_location:
    for agent_id in range(args.num_agents):
        game.random_start_location(agent_id, indoor=False)  # this will randomly spawn the player at a valid outdoor location, or indoor location if indoor is True
...
game.new_episode()  # start a new episode, this will load the mesh of the specified map
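
Putting these pieces together, a minimal episode loop might look like the sketch below. Note that apart from Game, set_map_id, new_episode, and the depth map and spawning calls shown above, the remaining names (set_available_actions, is_episode_finished, get_state, make_action, close, and the ActionVariable fields) are assumptions for illustration; refer to the starter scripts under examples/ for the exact API.

from inspirai_fps import Game, ActionVariable

game = Game(engine_dir="../fps_linux", map_dir="../map_data")
game.set_map_id(1)

# Assumed action-space setup: the agent controls walking direction and speed
game.set_available_actions([ActionVariable.WALK_DIR, ActionVariable.WALK_SPEED])
game.new_episode()

while not game.is_episode_finished():  # assumed episode-termination check
    state = game.get_state()           # assumed per-step observation accessor
    action = [90, 1]                   # illustrative values: heading (degrees), speed
    game.make_action({0: action})      # assumed mapping of agent id -> action values
game.close()                           # assumed engine shutdown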

Gameplay Visualization

We have also developed a replay visualization tool based on the Unity3D game engine. It resembles the spectator mode common in multiplayer FPS games, allowing users to interactively follow the gameplay. Users can view an agent's actions from different perspectives and switch between multiple agents or viewing modes (e.g., first person, third person, free) to watch the entire game in a more immersive way. Participants can download the tool for their specific platforms here:

To use this tool, follow the instructions below:

  • Decompress the downloaded file to anywhere you prefer.
  • Turn on the recording function with game.turn_on_record(); one replay file will be saved at the end of each episode (see the sketch below).
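
For example, enabling recording in a training script is a one-line change; calling it before new_episode (our assumed ordering) ensures the episode is captured:

game.turn_on_record()  # a replay file is saved at the end of each episode
game.new_episode()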

Find the replay files under the engine directory according to your platform:

  • Linux: <engine_dir>/fps_Data/StreamingAssets/Replay
  • Windows: <engine_dir>\FPSGameUnity_Data\StreamingAssets\Replay
  • MacOS: <engine_dir>/Contents/Resources/Data/StreamingAssets/Replay

Copy the replay files you want to watch into the replay tool directory for your platform and start the replay tool.

For Windows users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/FPSGameUnity_Data/StreamingAssets/Replay
  • Run FPSGameUnity.exe to start the application.

For MacOS users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/Contents/Resources/Data/StreamingAssets/Replay
  • Run fps.app to start the application.

In the replay tool, you can:

  • Select the record you want to watch from the drop-down menu and click PLAY to start playback.
  • During the replay, the following controls are available:
    • Press Tab: pause or resume playback
    • Press E: switch viewing mode (first person, third person, free)
    • Press Q: switch between multiple agents
    • Press Esc: stop the replay and return to the main menu