The Narya API allows you to track soccer players from camera inputs and evaluate them with an Expected Discounted Goal (EDG) agent.


Narya

The Narya API allows you to track soccer players from camera inputs and evaluate them with an Expected Discounted Goal (EDG) agent. This repository contains the implementation of the following paper. We also make available all of our pretrained agents, as well as the datasets we used.

The goal of this repository is to allow anyone, even without access to soccer data, to produce their own data and to analyse it with powerful tools. We also hope that by releasing our training procedures and datasets, better models will emerge and iteratively improve this tool.

We also built 4 notebooks to explain how to use our models, along with a Colab, and released a blog post version of these notebooks here.

We tried to make everything easy to reuse; we hope anyone will be able to:

  • Use our datasets to train other models
  • Finetune some of our trained models
  • Use our trackers
  • Evaluate players with our EDG Agent
  • and much more

At the bottom of the readme, you can find links to our models and datasets, as well as to tools and models trained by the community.

Installation

You can either install narya from source:

git clone https://github.com/DonsetPG/narya.git && cd narya && pip3 install -r requirements.txt

Google Football:

Google Football needs to be installed differently. Please see their repository for installation instructions.

Google Football Repo

Player tracking:

The installed version is directly compatible with the player tracking models. However, some errors might occur with keras.load_model when the architecture of the model is contained in the .h5 file. If in doubt, Python 3.7 always works with our installation.
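If keras.load_model does fail on such a file, one common workaround (a minimal sketch below, with a hypothetical architecture and file name) is to rebuild the network in code and load a weights-only checkpoint instead of deserializing the architecture from the .h5:

import tensorflow as tf

def build_model():
    # Hypothetical stand-in: in practice, recreate the exact network
    # definition in code instead of reading it from the .h5 file.
    inputs = tf.keras.Input(shape=(320, 320, 3))
    outputs = tf.keras.layers.Conv2D(29, 3, padding='same')(inputs)
    return tf.keras.Model(inputs, outputs)

model = build_model()
# Loading weights only avoids deserializing the architecture from the .h5.
model.load_weights('my_weights.h5')  # hypothetical weights-only checkpoint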

EDG:

As the Google Football API does not currently support TensorFlow 2, you need to manually downgrade your TensorFlow version in order to use our EDG agent:

pip3 install tensorflow==1.13.1
pip3 install tensorflow_probability==0.5.0
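A quick sanity check (a minimal sketch) that the downgraded versions are the ones actually picked up by your interpreter:

import tensorflow as tf
import tensorflow_probability as tfp

# Both should match the pinned versions above.
print(tf.__version__)   # expected: 1.13.1
print(tfp.__version__)  # expected: 0.5.0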

Models & Datasets:

The models will be downloaded automatically with the library. If needed, they can be accessed via the links at the end of the readme. The datasets are also available below.

Tracking Players Models:

Each model can be accessed on its own, or you can use the full tracker itself.

Single Model

Each pretrained model is built around the same interface to make it as easy as possible to use: you import it, and you use it. The processing functions and the different frameworks are handled internally.

Let's import an image:

import numpy as np
import cv2
image = cv2.imread('test_image.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

Now, let's create our models:

from narya.models.keras_models import DeepHomoModel
from narya.models.keras_models import KeypointDetectorModel
from narya.models.gluon_models import TrackerModel

direct_homography_model = DeepHomoModel()

keypoint_model = KeypointDetectorModel(
    backbone='efficientnetb3', num_classes=29, input_shape=(320, 320),
)

tracking_model = TrackerModel(pretrained=True, backbone='ssd_512_resnet50_v1_coco')

We can now directly make predictions:

homography_1 = direct_homography_model(image)
keypoints_masks = keypoint_model(image)
cid, score, bbox = tracking_model(image)

In the tracking class, we also process the estimated homographies with interpolation and filters. This ensures smooth estimations during the entire video.
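As an illustration of what the estimated homography can be used for, here is a minimal OpenCV sketch that warps a frame into the pitch template's coordinate system. It assumes the prediction can be turned into a 3x3 matrix and that the template is 320x320; check both against the model's actual output:

import cv2
import numpy as np

H = np.asarray(homography_1).squeeze()
if H.size == 8:
    # Some homography models predict only 8 parameters, with h33 fixed to 1.
    H = np.append(H, 1.0)
H = H.reshape(3, 3)

# Warp the frame into the template's coordinate system
# (the 320x320 output size is an assumption).
warped = cv2.warpPerspective(image, H, (320, 320))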

Processing:

We can now visualize or use each of these predictions. For example, to visualize the predicted keypoints:

from narya.utils.vizualization import visualize

visualize(
    image=denormalize(image.squeeze()),  # denormalize undoes the input preprocessing
    pr_mask=keypoints_masks[..., -1].squeeze(),
)

Full Tracker:

Given a list of images, one can easily apply our tracking algorithm:

from narya.tracker.full_tracker import FootballTracker

This tracker combines the 3 models seen above with a tracking/re-identification model. You can create it by specifying your frame rate and the size of the frame memory buffer:

tracker = FootballTracker(frame_rate=24.7, track_buffer=60)

Given a list of images, the full tracking is computed using:

trajectories = tracker(img_list, split_size=512, save_tracking_folder='test_tracking/',
                       template=template, skip_homo=None)
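If you start from a video file rather than a list of images, a minimal OpenCV sketch for building img_list (the file name is hypothetical):

import cv2

img_list = []
cap = cv2.VideoCapture('match_clip.mp4')  # hypothetical input video
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # The models above expect RGB images, so convert from OpenCV's BGR.
    img_list.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()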

We also built post-processing functions to handle the mistakes the tracker can make, as well as visualization tools to plot the data.

EDG:

The best way to use our EDG agent is to first convert your tracking data to the Google Football format, using the utils functions:

from narya.utils.google_football_utils import _save_data, _build_obs_stacked

# df: a pandas DataFrame of tracking data (e.g. built from the trajectories above)
data_google = _save_data(df, 'test_temo.dump')
observations = {
    'frame_count': [],
    'obs': [],
    'obs_count': [],
    'value': []
}
for i in range(len(data_google)):
    obs, obs_count = _build_obs_stacked(data_google, i)
    observations['frame_count'].append(i)
    observations['obs'].append(obs)
    observations['obs_count'].append(obs_count)

You can now easily load a pretrained agent, and use it to get the value of any action with:

from narya.analytics.edg_agent import AgentValue

agent = AgentValue(checkpoints=checkpoints)  # checkpoints: path to the pretrained agent weights
value = agent.get_value([obs])

Processing:

You can use these values to plot the value of an action over time, or to plot a map of values at a given moment. You can use:

map_value = agent.get_edg_map(observations['obs'][20], observations['obs_count'][20], 79, 57, entity='ball')

and

import pandas as pd

for indx, obs in enumerate(observations['obs']):
    value = agent.get_value([obs])
    observations['value'].append(value)
df_dict = {
    'frame_count': observations['frame_count'],
    'value': observations['value']
}
df_ = pd.DataFrame(df_dict)

to compute an EDG map and the EDG over time of an action.
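For instance, a minimal matplotlib sketch of both plots; it assumes map_value is a 2D array over pitch locations and that the values in df_ are scalars (squeeze them first if the agent returns arrays):

import matplotlib.pyplot as plt

# EDG of the action over time, from the dataframe built above.
plt.plot(df_['frame_count'], df_['value'])
plt.xlabel('frame')
plt.ylabel('EDG')
plt.show()

# EDG map at a given moment (assumes a 2D array of values).
plt.imshow(map_value)
plt.colorbar()
plt.show()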

Open Source

Our goal with this project was to build a powerful tool to analyse soccer plays, which led us to build a soccer player tracking model on top of it. We hope that by releasing our code, weights, and datasets, more people will be able to carry out amazing projects related to soccer/sport analysis.

If you find any bug, please open an issue. If you see a possible improvement, or have trained a model you want to share, please open a pull request!

Thanks

A special thanks to Last Row for providing some tracking data at the beginning to try our agent, and to Soccermatics for providing visualization tools (and some motivation to start this project).

Citation

If you use Narya in your research and would like to cite it, we suggest you use the following citation:

@misc{garnier2021evaluating,
      title={Evaluating Soccer Player: from Live Camera to Deep Reinforcement Learning}, 
      author={Paul Garnier and Théophane Gregoir},
      year={2021},
      eprint={2101.05388},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Links:

Links to the models and datasets from the original Paper

| Model | Description | Link |
| --- | --- | --- |
| 11_vs_11_selfplay_last | EDG agent | https://storage.googleapis.com/narya-bucket-1/models/11_vs_11_selfplay_last |
| deep_homo_model.h5 | Direct Homography estimation Weights | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model.h5 |
| deep_homo_model_1.h5 | Direct Homography estimation Architecture | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model_1.h5 |
| keypoint_detector.h5 | Keypoints detection Weights | https://storage.googleapis.com/narya-bucket-1/models/keypoint_detector.h5 |
| player_reid.pth | Player Embedding Weights | https://storage.googleapis.com/narya-bucket-1/models/player_reid.pth |
| player_tracker.params | Player & Ball detection Weights | https://storage.googleapis.com/narya-bucket-1/models/player_tracker.params |

The datasets can be downloaded at:

| Dataset | Description | Link |
| --- | --- | --- |
| homography_dataset.zip | Homography Dataset (image, homography) | https://storage.googleapis.com/narya-bucket-1/dataset/homography_dataset.zip |
| keypoints_dataset.zip | Keypoint Dataset (image, list of masks) | https://storage.googleapis.com/narya-bucket-1/dataset/keypoints_dataset.zip |
| tracking_dataset.zip | Tracking Dataset in VOC format (image, bounding boxes for players/ball) | https://storage.googleapis.com/narya-bucket-1/dataset/tracking_dataset.zip |

Links to models trained by the community

Experimental data for vertical pitches:

| Model | Description | Link |
| --- | --- | --- |
| vertical_HomographyModel_0.0001_32.h5 | Direct Homography estimation Weights | https://storage.googleapis.com/narya-bucket-1/models/vertical_HomographyModel_0.0001_32.h5 |
| vertical_FPN_efficientnetb3_0.0001_32.h5 | Keypoints detection Weights | https://storage.googleapis.com/narya-bucket-1/models/vertical_FPN_efficientnetb3_0.0001_32.h5 |

| Dataset | Description | Link |
| --- | --- | --- |
| vertical_samples_direct_homography.zip | Homography Dataset (image, homography) | https://storage.googleapis.com/narya-bucket-1/dataset/vertical_samples_direct_homography.zip |
| vertical_samples_keypoints.zip | Keypoint Dataset (image, list of masks) | https://storage.googleapis.com/narya-bucket-1/dataset/vertical_samples_keypoints.zip |

Tools

Tool for efficient creation of training labels:

Tool built by @larsmaurath to label football images: https://github.com/larsmaurath/narya-label-creator

Tool for creation of keypoints datasets:

Tool built by @kkoripl to create keypoints datasets - XML files and image resizing: https://github.com/kkoripl/NaryaKeyPointsDatasetCreator
