TrackTech: Real-time tracking of subjects and objects on multiple cameras

Overview


This project is part of the 2021 spring bachelor final project of the Bachelor of Computer Science at Utrecht University. The team that worked on the project consists of eleven students from the Bachelor of Computer Science and Bachelor of Game Technology. This project has been done for educational purposes. All code is open-source, and proper credit is given to respective parties.

GPU support

Updating/Installing drivers

Update the GPU drivers and restart the system for the changes to take effect. Optionally, install a different driver from the list printed by ubuntu-drivers devices (see below).

sudo apt install nvidia-driver-460
sudo reboot
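
To list the drivers available for the detected GPU before picking one, run the command referenced above (available on Ubuntu via the ubuntu-drivers-common package):

ubuntu-drivers devices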

Installing the container toolkit

Add the NVIDIA Docker repository for the distribution, update the package manager, install NVIDIA Docker support (nvidia-docker2), and restart Docker for the changes to take effect. For more information, see the install guide.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt install -y nvidia-docker2
sudo systemctl restart docker
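
As a quick sanity check (not part of the original instructions), GPU access from inside a container can be verified by running nvidia-smi in a CUDA base image; the image tag below is an assumption and may need to be adjusted:

sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi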

Acquire the GPU ID

According to this guide, read the GPU UUID (just the first part, e.g. GPU-a1b2c3d) from the output of

nvidia-smi -a
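
The UUID can also be queried directly with nvidia-smi's query interface, which avoids scanning the full report (this is a standard nvidia-smi option, not part of the original instructions):

nvidia-smi --query-gpu=uuid --format=csv,noheader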

Add the resource

Add the GPU UUID from the last step to the Docker engine configuration file, typically located at /etc/docker/daemon.json. Create the file if it does not exist yet.

{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia",
  "node-generic-resources": ["gpu=GPU-a1b2c3d"]
}
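
After editing daemon.json, Docker has to be restarted once more for the new configuration to be picked up. Whether the NVIDIA runtime is now the default can be checked with docker info (the grep filter is just a convenience):

sudo systemctl restart docker
docker info | grep -i "default runtime"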

Pylint

We use Pylint for Python code quality assurance.

Installation

Enter the following command in a terminal:

pip install pylint

Run

To run linting on the entire repository, run the following command from the root: pylint CameraProcessor docs Interface ProcessorOrchestrator utility VideoForwarder --rcfile=.pylintrc --reports=n

Explanation

pylint is the Python module to run.

--rcfile is the linting specification used by Pylint.

--reports sets whether the full report should be displayed. We recommend n, since this only displays linting errors/warnings and the final score.

Constraints

Pylint needs an __init__.py file in the subsystem root in order to traverse all folders to lint. Linting therefore has to target the subsystems, since the repository root does not contain an __init__.py file.
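
For example, a single subsystem can be linted from the repository root by passing only that subsystem, following the same pattern as the full command above (illustrative invocation, not from the original instructions):

pylint CameraProcessor --rcfile=.pylintrc --reports=n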

Ignoring folders from linting

Some folders should be excluded from linting. The exclusion can be needed for multiple reasons, such as the symlinked algorithms in the CameraProcessor folder or the Python virtual environment folder. Add the folder name to ignore= in .pylintrc.
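
A minimal sketch of the relevant .pylintrc entry, assuming the folders to skip are named algorithms and venv:

[MASTER]
ignore=algorithms,venv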

Comments
  • FFT: Spc 414 reconnect when stream suddenly stops

    It works, but there are a lot of but's.

    If the forwarder comes back online and the processor reconnects and starts sending boxes again before the interface reloads, the sync should be fine.

    If the forwarder comes back and the interface reloads before the processor starts sending boxes again, sync seems inconsistent. Sometimes it's fine, other times there is a small desync. Desync can be fixed fairly reliably by manually pausing the stream for a few seconds. I think this could be fixed with the 'hack' that makes the video jump a little bit after loading. But this was removed earlier because the jump was considered annoying and not a good fix.

    If the forwarder is not back yet when the interface reloads, there is a chance of ending up with a videojs error, which requires a full page reload to fix.

    TL;DR: as long as the processor is sending boxes before the interface reloads, sync should remain acceptable.

    IMO it's at least better than nothing.

    opened by BrianVanB 2
  • FFT: Remove camera id from the configs.ini

    The camera id should not be in the camera processor configuration, since it is required to be specified inside the environment. Otherwise, it is possible to mistakenly start up a camera processor without being guaranteed to have thought about the ID.

    opened by GerardvSchie 1
  • Spc 801 pylint enforce class and file name equal

    class ITracker requires the file name: i_tracker.py
    class Tracker requires the file name: tracker.py

    Stricter linting was implemented, and impacted files were renamed according to the enforced standard.

    opened by GerardvSchie 1
  • SPC-728 implement reidentification as a scheduler component

    Extended the scheduler to also use globals (objects that do not change during a scheduling iteration, i.e. one graph traversal).

    Allow multiple inputs to the initial node (previously only one was supported, and one was required).

    The re-id stage and the frame buffer (which uses the output of the re-id stage) were added to the scheduler.

    The start node is only scheduled if it is immediately ready; this may or may not be favourable and has the following consequences:

    • Only nodes connected to the start node are executed, but only one start node is allowed.
    • If a node is included in the plan only via globals, it will not get executed.
    opened by tim-van-kemenade 1
  • FIX: SPC 662 fix warning when stream buffers

    If a stream buffered on first play, it would spam the console with a warning saying something was undefined. Fixed by adding more checks that everything is defined before accessing it.

    opened by BrianVanB 1
Releases (v1.0.0)
  • v1.0.0 (Jun 29, 2021)

    Release v1.0.0

    The following release note contains a brief overview of each component and its features. Underneath, the currently known bugs can be found.

    Features

    Processor

    The Camera Processor handles the core processing, using detection, tracking, and re-identification algorithms on an image or video feed. It can swap algorithms via new implementations of subclasses of the relevant superclass. Currently implemented are YOLOv5 and YOLOR for detection, SORT for tracking, and TorchReid and FastReid for re-identification.

    Multiple input methods

    The processor processes OpenCV frames. It can process any source that can be turned into a sequence of frames. The supported sources are implemented via a capture interface. The available captures are HLS, video stream, webcam, and an image folder. HLS is how a video feed is received via the internet; this capture performs extra work to add proper timestamps to the feed.

    Plug and play for main pipeline components

    The main pipeline contains a detection, tracking, and re-identification phase. All these phases are implemented and adhere to the interface belonging to the phase. Implementing another algorithm that conforms to this interface would allow for the algorithm to be loaded in via the configuration. This way, many different algorithms can be defined and swapped when needed.

    Scheduler

    Create a node structure representing a graph, and the scheduler will handle the scheduling of all nodes in each graph iteration. This prevents rewriting things like the pipeline for a more significant change in the form of the pipeline. These graphs are called plans, and thus multiple self-contained plans can be created that can also be swapped on-premise.

    Multiple output methods: deploy, opencv, tornado

    The processor has three output methods: deploy, opencv, and tornado. Deploy sends information about the processed frame to the orchestrator, which sends it to other processors or the interface. OpenCV displays the processed frames in an OpenCV window. Tornado displays the same OpenCV output but does so in a dedicated webpage. It is discouraged to use the tornado mode for anything other than development since it takes a heavy toll on performance.

    Training of algorithms

    Both the detection and the re-identification algorithms can be trained with custom datasets. Instructions on how to train these individual components can be found here. The tracking algorithm is not neural-network-based and can therefore not be trained.

    Accuracy measurement and metrics

    Several metrics were implemented for determining the accuracy of the detection, the tracking, and the re-identification. The detection uses the Mean Average Precision metric. The tracking uses the MOT metric. The re-identification uses the Mean Average Precision and Rank-1 metrics. An extensive explanation of the used accuracy metrics can be found here.

    Interface

    A tornado-based webpage interface is used to view the video feeds as well as the detected bounding boxes. It features automated syncing for different camera feeds and their bounding boxes. It has options to select classification types to detect and swap camera focus. The user can click on a bounding box to start tracking an object. The interface features a timeline that keeps track of when and for how long a subject has appeared on each camera for a clear overview.

    Automated bounding box syncing

    When the interface receives bounding boxes from the orchestrator and a video stream from the forwarder, it tries to match each box to the frame it belongs to. This is done internally using frame ids. This saves the user from manually setting the box/video delay to synchronize them.

    Timelines

    Timelines is a page where the history of all tracked objects can be found. This can be useful to see where an object was during the time it was tracked. When an object is still being tracked, its cutout will be visible next to the object id.

    Forwarder

    Adaptive bitrate

    The forwarder can convert a single incoming stream (like RTMP or RTSP) to multiple bitrate output streams. This way, the stream bitrate can be adapted according to available bandwidth.

    Other

    Security

    OAuth2 is used to make sure only authorized people can access services they should be able to access. Using authentication is optional and can be ignored when developing or testing.

    Docker Images

    Each component contains a Dockerfile used to build images. These images are publicly available on Dockerhub. This allows for easy downloading and deployment.

    Known bugs

    Syncing

    The synchronization of the bounding boxes and the video stream on the interface is sometimes off, causing the bounding boxes to have an offset compared to their expected location. Sometimes this can be fixed by pausing the video for a few seconds, but not always.

    Authentication between processor and forwarder

    The OpenCV library used to pull in the video from the forwarder does not allow any header to be added to the requests. This means that authentication needs to be disabled for local requests. Luckily, most orchestration tools (like Docker Swarm) allow selective port opening to the outside. We allowed unauthenticated forwarder access over port 80 on HTTP (as auth should not be done over an unencrypted connection anyway), which can be used by the processors.

    Processor does not properly handle memory paging on some computers

    This issue only occurred on one computer, which had too little memory to handle the processor. The team could not reproduce the bug on other computers with memory constraints. On this computer, the paging file size keeps increasing until there is no more disk space left, eventually resulting in a processor crash. The processor's memory profile does not grow over time, so a system that has enough memory to run for 10 minutes should be able to run for 24 hours or longer. The only memory consumption that increases over time is the feature maps of tracked objects, but these vectors take up little space, and it is generally expected that there are not that many tracked objects.
