motutils

Overview

motutils is a Python package for multiple object tracking research with a focus on laboratory animal tracking.

Features

  • loads: MOTChallenge CSV, IdTracker, idtracker.ai, SLEAP analysis and ToxTrac trajectories
  • saves: MOTChallenge CSV
  • Mot, BboxMot and PoseMot classes backed by xarray dataset with frame and id coordinates
  • export to Pandas DataFrame
  • oracle detector: a fake, all-knowing detector based on the ground truth with configurable inaccuracies
  • different classes of tracked objects: point, bounding box, pose
  • interpolation of missing positions (see the sketch after this list)
  • find mapping between MOT results and ground truth
  • visualization:
    • tracked positions / objects overlaid on a video
    • montage of multiple videos with results and/or ground truth
  • CLI
    • visualization
    • evaluation
    • mot format conversion
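
A minimal sketch of the interpolation idea, working directly on the underlying xarray dataset (not necessarily how motutils does it internally); Mot and the test file come from the quickstart below, and interpolate_na is a standard xarray method:

from motutils import Mot

mot = Mot("tests/data/Sowbug3_cut.csv")
# mot.ds is an xarray.Dataset indexed by frame and id;
# missing detections appear as NaN in the x and y variables
filled = mot.ds.interpolate_na(dim="frame", method="linear")
print(int(filled["x"].isnull().sum()))  # number of x positions still missing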

Visualization montage: video comparison of multiple tracking methods and the ground truth.

Installation

pip install git+https://github.com/smidm/motutils

Usage

$ motutils --help
Usage: motutils [OPTIONS] COMMAND [ARGS]...

Options:
  --load-mot FILENAME             load a MOT challenge csv file(s)
  --load-gt FILENAME              load ground truth from a MOT challenge csv
                                  file
  --load-idtracker FILENAME       load IdTracker trajectories (e.g.,
                                  trajectories.txt)
  --load-idtrackerai FILENAME     load idtracker.ai trajectories (e.g.,
                                  trajectories_wo_gaps.npy)
  --load-sleap-analysis FILENAME  load SLEAP analysis trajectories (exported
                                  from sleap-label File -> Export Analysis
                                  HDF5)
  --load-toxtrac FILENAME         load ToxTracker trajectories (e.g.,
                                  Tracking_0.txt)
  --toxtrac-topleft-xy <INTEGER INTEGER>...
                                  position of the arena top left corner, see
                                  first tuple in the Arena line in Stats_1.txt
  --help                          Show this message and exit.

Commands:
  convert    Convert any format to MOT Challenge format.
  eval       Evaluate a single MOT file against the ground truth.
  visualize  Visualize MOT file(s) overlaid on a video.
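
For example, ToxTrac trajectories can be loaded and converted to the MOT Challenge format in one call (the arena corner coordinates and the output file name below are placeholders; Tracking_0.txt and Stats_1.txt follow the examples in the help text):

$ motutils --load-toxtrac Tracking_0.txt --toxtrac-topleft-xy 31 48 convert toxtrac_mot.csv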

   
$ motutils convert --help

Usage: motutils convert [OPTIONS] OUTPUT_MOT

  Convert any format to MOT Challenge format.

$ motutils eval --help

Usage: motutils eval [OPTIONS]

  Evaluate a single MOT file against the ground truth.

Options:
  --write-eval FILENAME  write evaluation results as a CSV file
  --keypoint INTEGER     keypoint to use when evaluating pose MOT results
                         against point ground truth
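
For example, to evaluate a tracker output against the ground truth and write the metrics to a file (gt.csv, tracks.csv and eval.csv are placeholder file names):

$ motutils --load-gt gt.csv --load-mot tracks.csv eval --write-eval eval.csv
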
$ motutils visualize --help

Usage: motutils visualize [OPTIONS] VIDEO_IN VIDEO_OUT
                          [SOURCE_DISPLAY_NAME]...

  Visualize MOT file(s) overlaid on a video.

Options:
  --limit-duration INTEGER  visualization duration limit in s
  --help                    Show this message and exit.
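
For example, to overlay a single tracker output on a video with the visualization capped at 60 seconds (the file names and the display name are placeholders):

$ motutils --load-mot tracks.csv visualize --limit-duration 60 input.mp4 overlaid.mp4 "my tracker"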

Python API Quickstart

>>> from motutils import Mot
>>> mot = Mot("tests/data/Sowbug3_cut.csv")

>>> mot.ds
<xarray.Dataset>
Dimensions:     (frame: 4500, id: 5)
Coordinates:
  * frame       (frame) int64 0 1 2 3 4 5 6 ... 4494 4495 4496 4497 4498 4499
  * id          (id) int64 1 2 3 4 5
Data variables:
    x           (frame, id) float64 434.5 277.7 179.2 ... 185.3 138.6 420.2
    y           (frame, id) float64 279.0 293.6 407.9 ... 393.3 387.2 294.7
    width       (frame, id) float64 nan nan nan nan nan ... nan nan nan nan nan
    height      (frame, id) float64 nan nan nan nan nan ... nan nan nan nan nan
    confidence  (frame, id) float64 1.0 1.0 1.0 1.0 1.0 ... 1.0 1.0 1.0 1.0 1.0

>>> mot.num_ids()
5

>>> mot.count_missing()
0

>>> mot.get_object(frame=1, obj_id=2)
<xarray.Dataset>
Dimensions:     ()
Coordinates:
    frame       int64 1
    id          int64 2
Data variables:
    x           float64 278.2
    y           float64 293.7
    width       float64 nan
    height      float64 nan
    confidence  float64 1.0

>>> mot.match_xy(frame=1, xy=(300, 300), maximal_match_distance=40)
<xarray.Dataset>
Dimensions:     ()
Coordinates:
    frame       int64 1
    id          int64 2
Data variables:
    x           float64 278.2
    y           float64 293.7
    width       float64 nan
    height      float64 nan
    confidence  float64 1.0

>>> mot.to_dataframe()
       frame  id      x      y  width  height  confidence
0          1   1  434.5  279.0   -1.0    -1.0         1.0
1          1   2  277.7  293.6   -1.0    -1.0         1.0
2          1   3  179.2  407.9   -1.0    -1.0         1.0
3          1   4  180.0  430.0   -1.0    -1.0         1.0
4          1   5  155.0  397.0   -1.0    -1.0         1.0
...      ...  ..    ...    ...    ...     ...         ...
22495   4500   1   90.3  341.9   -1.0    -1.0         1.0
22496   4500   2  187.9  431.9   -1.0    -1.0         1.0
22497   4500   3  185.3  393.3   -1.0    -1.0         1.0
22498   4500   4  138.6  387.2   -1.0    -1.0         1.0
22499   4500   5  420.2  294.7   -1.0    -1.0         1.0
[22500 rows x 7 columns]
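
Building on the quickstart objects above, a short sketch of downstream analysis that uses only the documented to_dataframe() method and the ds attribute together with plain pandas/xarray:

df = mot.to_dataframe()
# mean position of every tracked object over the whole sequence
print(df.groupby("id")[["x", "y"]].mean())

# the same computed directly on the underlying xarray dataset
print(mot.ds[["x", "y"]].mean(dim="frame"))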

Documentation

See the quickstart and tests for now.

Write to me if you would like to use the package but the lack of documentation is holding you back. You can easily move documentation up my priority list simply by letting me know there is interest.

Owner
Matěj Šmíd