Wilderness Scavenger: 3D Open-World FPS Game AI Challenge

This is a platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI.

Change Log

  • 2022-05-16: improved engine backend (Linux) with better stability (v1.0)
    • Check out Supported Platforms for download links.
    • Make sure to update to the latest version of the engine if you would like to use depth map or enemy state features.
  • 2022-05-18: updated engine backend for Windows and MacOS (v1.0)

Competition Overview

With a focus on learning intelligent agents in open-world games, this year we are hosting a new contest called Wilderness Scavenger. In this new game, which features Battle Royale-style 3D open-world gameplay and random PCG-based world generation, participants must train agents that can perform subtasks common to FPS games, such as navigation, scouting, and skirmishing. To win the competition, agents need strong perception of complex 3D environments and must learn to exploit various environmental structures (such as terrain, buildings, and plants) by developing flexible strategies to gain advantages over other competitors. Despite the difficulty of this goal, we hope that this new competition can serve as a cornerstone of research in AI-based gaming for open-world games.

Features

  • A lightweight 3D open-world FPS game developed with the Unity3D game engine
  • Rendering-off game acceleration for fast training and evaluation
  • A large open-world environment giving agents a high degree of behavioral freedom
  • Highly customizable game configuration with random supply distribution and dynamic refresh
  • PCG-based map generation with randomly spawned buildings, plants and obstacles (100 training maps)
  • Interactive replay tool for game record visualization

Basic Structures

We developed this repository to provide a training and evaluation platform for researchers interested in open-world FPS game AI. To get started quickly, a typical workspace structure when using this repository is summarized as follows:

.
├── examples  # providing starter code examples and training baselines
│   ├── envs/...
│   ├── basic.py
│   ├── basic_track1_navigation.py
│   ├── basic_track2_supply_gather.py
│   ├── basic_track3_supply_battle.py
│   ├── baseline_track1_navigation.py
│   ├── baseline_track2_supply_gather.py
│   └── baseline_track3_supply_battle.py
├── inspirai_fps  # the gameplay API source code
│   ├── lib/...
│   ├── __init__.py
│   ├── gamecore.py
│   ├── raycast_manager.py
│   ├── simple_command_pb2.py
│   ├── simple_command_pb2_grpc.py
│   └── utils.py
└── fps_linux  # the engine backend (Linux)
    ├── UnityPlayer.so
    ├── fps.x86_64
    ├── fps_Data/...
    └── logs/...
  • fps_linux (must be manually downloaded and unzipped into your working directory): the Linux engine backend extracted from our game development project, containing all game-related assets, binaries, and source code.
  • inspirai_fps: the Python gameplay API for agent training and testing, providing the core Game class and other useful tool classes and functions.
  • examples: basic starter code for each game mode targeting each track of the challenge, plus our implementations of baseline solutions based on the ray.rllib reinforcement learning framework.

Supported Platforms

We support multiple platforms with different engine backends, including:

Installation (from source)

To use the gameplay API, first install the inspirai_fps package with the following commands:

git clone https://github.com/inspirai/wilderness-scavenger
cd wilderness-scavenger
pip install .
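
If you intend to modify the gameplay API itself, a standard pip editable install (a generic pip option, not specific to this project) keeps your local changes live without reinstalling:

pip install -e .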

We recommend installing this package with Python 3.8 (our development environment), so you may first create a virtual environment using conda and then finish the installation:

$ conda create -n WildScav python=3.8
$ conda activate WildScav
(WildScav) $ pip install .
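
To verify the installation, a quick import check using the classes documented in this README should run without errors:

(WildScav) $ python -c "from inspirai_fps import Game, ActionVariable"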

Installation (from PyPI)

Note: the PyPI release may not always be kept up to date. We strongly recommend using the installation method above.

Alternatively, you can install the package directly from PyPI. Note that this only installs the gameplay API inspirai_fps, not the engine backend, so you still need to manually download the correct engine backend from the Supported Platforms section.

pip install inspirai-fps

Loading Engine Backend

To run the game successfully, make sure the game engine backend for your platform is downloaded, and set the engine_dir parameter of the Game constructor correctly. For example, here is a code snippet from the script examples/basic.py:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--engine-dir", type=str, default="../fps_linux")
...
game = Game(..., engine_dir=args.engine_dir, ...)
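
If you prefer to skip argparse, the same setup can be written directly. This is a minimal sketch, assuming the backend was unzipped to ../fps_linux and that the remaining Game parameters can be left at their defaults:

from inspirai_fps import Game

# assumption: Linux engine backend unzipped to ../fps_linux; adjust the path
# to wherever you placed it
game = Game(engine_dir="../fps_linux")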

Loading Map Data

To access features such as real-time depth map computation and randomized player spawning, you need to download the map data and load it into the Game. Once depth map rendering is turned on, the game server will automatically compute a depth map viewed from the player's first-person perspective at each time step.

  1. Download the map data from Google Drive or Feishu and decompress the downloaded file to your preferred directory (e.g., <WORKDIR>/map_data).
  2. Set the map_dir parameter of the Game initializer accordingly.
  3. Set the map_id as you like.
  4. Turn on depth map computation.
  5. Turn on random start locations to spawn agents at random places.

Read the following code snippet in the script examples/basic.py as an example:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--map-id", type=int, default=1)
parser.add_argument("--use-depth-map", action="store_true")
parser.add_argument("--random-start-location", action="store_true")
parser.add_argument("--map-dir", type=str, default="../map_data")
...
game = Game(map_dir=args.map_dir, ...)
game.set_map_id(args.map_id)  # this will load the valid locations of the specified map
...
if args.use_depth_map:
    game.turn_on_depth_map()
    game.set_depth_map_size(380, 220, 200)  # width (pixels), height (pixels), depth_limit (meters)
...
if args.random_start_location:
    for agent_id in range(args.num_agents):
        game.random_start_location(agent_id, indoor=False)  # this will randomly spawn the player at a valid outdoor location, or indoor location if indoor is True
...
game.new_episode()  # start a new episode, this will load the mesh of the specified map

Gameplay Visualization

We have also developed a replay visualization tool based on the Unity3D game engine. It is similar to the spectator mode common in multiplayer FPS games, allowing users to interactively follow the gameplay. Users can view an agent's actions from different perspectives and switch between multiple agents or viewing modes (e.g., first person, third person, free) to watch the entire game in a more immersive way. Participants can download the tool for their specific platforms here:

To use this tool, follow the instructions below:

  • Decompress the downloaded file to a directory you prefer.
  • Turn on the recording function with game.turn_on_record(); one record file will be saved at the end of each episode (see the sketch below).
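
In code, enabling recording is a one-line addition before starting an episode. A minimal sketch using only the calls shown in this README (the ordering, recording enabled before the episode starts, is an assumption):

game.turn_on_record()  # a replay file is written when the episode ends
game.new_episode()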

Find the replay files under the engine directory according to your platform:

  • Linux: <engine_dir>/fps_Data/StreamingAssets/Replay
  • Windows: <engine_dir>\FPSGameUnity_Data\StreamingAssets\Replay
  • MacOS: <engine_dir>/Contents/Resources/Data/StreamingAssets/Replay

Copy the replay files you want into the replay tool directory for your platform and start the replay tool.

For Windows users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/FPSGameUnity_Data/StreamingAssets/Replay
  • Run FPSGameUnity.exe to start the application.

For MacOS users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/Contents/Resources/Data/StreamingAssets/Replay
  • Run fps.app to start the application.

In the replay tool, you can:

  • Select the record you want to watch from the drop-down menu and click PLAY to start playing the record.
  • During the replay, you can perform the following operations:
    • Press Tab: pause or resume
    • Press E: switch observation mode (between first person, third person, free)
    • Press Q: switch between multiple agents
    • Press Esc: stop the replay and return to the main menu