Measures input lag without dedicated hardware, performing motion detection on recorded or live video

Overview

What is InputLagTimer?

This tool measures input lag by analyzing video, either live from a webcam or from a recorded file, in which both the game controller and the game screen are visible.

Here's how it looks in action:

Usage demo

Even though the typical use case is game latency, InputLagTimer can measure any latency as long as it's captured on video. For example, if you point a camera at both your car key and its door lock, you can measure how quickly the remote unlocks your car.

How does it measure input lag?

You first mark two rectangles in the video you provide:

  • 🟦 Input rectangle (blue): where the input motion happens, such as a gamepad stick.
  • 🟪 Output rectangle (purple): where the response will be visible, such as the middle left of your TV screen, where the front wheels can be seen turning in your car simulator.

InputLagTimer will detect motion on the input area, and time how long it takes to detect motion on the output area.

Latencies of up to 700ms should work out of the box; if you need to measure slower events, the limit can be trivially edited in the code.
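To make the approach concrete, here is a minimal Python/OpenCV sketch of the idea: frame differencing inside each rectangle, and timing between the two threshold crossings. The file name, rectangles and thresholds below are placeholders, and this is an illustration of the technique rather than InputLagTimer's actual code.

```python
import cv2

# Placeholder rectangles and thresholds; in InputLagTimer these are selected interactively.
INPUT_RECT = (50, 300, 120, 120)    # x, y, w, h around the gamepad stick
OUTPUT_RECT = (400, 200, 160, 160)  # x, y, w, h around the reacting screen area
INPUT_THRESHOLD = 8.0               # mean per-pixel difference that counts as "motion"
OUTPUT_THRESHOLD = 8.0
MAX_LATENCY = 0.7                   # seconds; analogous to the 700ms limit mentioned above

def motion(prev_gray, gray, rect):
    """Mean absolute per-pixel difference between two frames, inside a rectangle."""
    x, y, w, h = rect
    return cv2.absdiff(prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w]).mean()

cap = cv2.VideoCapture("capture.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
prev_gray, frame_idx, input_frame = None, 0, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        if input_frame is None:
            if motion(prev_gray, gray, INPUT_RECT) > INPUT_THRESHOLD:
                input_frame = frame_idx                   # input motion detected: start timing
        elif motion(prev_gray, gray, OUTPUT_RECT) > OUTPUT_THRESHOLD:
            print(f"Latency: {(frame_idx - input_frame) / fps * 1000:.1f} ms")
            input_frame = None                            # ready for the next measurement
        elif (frame_idx - input_frame) / fps > MAX_LATENCY:
            input_frame = None                            # no response in time: discard
    prev_gray, frame_idx = gray, frame_idx + 1
```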

How to use it:

  1. Download InputLagTimer (some Windows binaries are available on GitHub if you prefer that)
  2. Open InputLagTimer:
    • Plug in your webcam, then run the program.
    • Or drag-and-drop your video file to the program.
    • Or, from the command line, type InputLagTimer 2 to open the 3rd webcam, or InputLagTimer file.mp4 to open a file (see the sketch after this list).
  3. Press S then follow screen instructions to select the 🟦 input and 🟪 output rectangles.
  4. Observe the input and output motion bars at the top, and press 1/2 and 3/4 to adjust the motion detection thresholds (white indicator). Latency timing will start when the input motion passes the threshold, and stop when the output motion does.
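Regarding step 2, the webcam-index behaviour follows how OpenCV opens capture sources: a numeric argument selects a camera (zero-based, so 2 is the third webcam), while anything else is treated as a file path. This is a simplified sketch, not the program's actual argument handling:

```python
import sys
import cv2

# Simplified sketch: a numeric argument selects a webcam (zero-based index),
# anything else is treated as a video file path.
arg = sys.argv[1] if len(sys.argv) > 1 else "0"
source = int(arg) if arg.isdigit() else arg
cap = cv2.VideoCapture(source)
if not cap.isOpened():
    sys.exit(f"Could not open video source: {arg}")
```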

Note: a .cfg file will be created for each video, allowing the same latency analysis to be reproduced later.
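As a rough idea of what such a file needs to contain for an analysis to be reproducible, the sketch below stores the two rectangles and thresholds with Python's configparser; the keys and layout are purely illustrative, not the format InputLagTimer actually writes.

```python
import configparser

# Hypothetical layout -- the real .cfg written by InputLagTimer may differ.
config = configparser.ConfigParser()
config["input"] = {"x": "50", "y": "300", "w": "120", "h": "120", "threshold": "8.0"}
config["output"] = {"x": "400", "y": "200", "w": "160", "h": "160", "threshold": "8.0"}
with open("capture.mp4.cfg", "w") as cfg_file:
    config.write(cfg_file)
```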

Tips and gotchas

  • Use a tripod to hold the camera. InputLagTimer relies on motion detection, so hand-held footage is doomed to spam false positives.
  • Disable gamepad vibration and put the gamepad on a table (unless you want to measure vibration latency!): in other words, reduce unnecessary motion in both the input and output rectangles.
  • Select the 🟦 input and 🟪 output rectangles as accurately as possible. E.g. to measure keyboard key travel time, draw an input rectangle including the entire key height. If you don't want to include key travel latency, draw the input rectangle as close to the key activation point as possible.
  • If using certain artificial lights, enable the camera's anti-flicker feature when available (press C in InputLagTimer when using a webcam), or choose a recording framerate different from the powerline frequency used in your country (often 50Hz or 60Hz). This removes video flicker, vastly improving motion detection.
  • Prefer a higher recording framerate; it provides finer-grained latency measurements:
    • Some phones and actioncams can reach hundreds of FPS.
    • Recording equipment may not reach its advertised framerate if the scene is not bright enough. If in doubt, add more lighting.
  • If your camera cannot reach the requested framerate (e.g. it only manages to capture 120FPS out of 240FPS, due to lack of light), consider recording directly at the reachable framerate. This eliminates the useless filler frames your camera was forced to duplicate, making it easier to tune the motion detection thresholds in InputLagTimer.
  • Prefer global shutter over rolling shutter cameras. Rolling shutter can slightly skew latency measurements, as one corner of the image is recorded earlier than the opposite corner.

Rolling Shutter example

(source: Axel1963 - CC BY-SA 3.0)

  • Screens normally refresh pixels at the top earlier than pixels at the bottom (or left before right, etc.). The location of the 🟦 input / 🟪 output rectangles on the screen can therefore slightly skew latency measurements.
  • The pixels on a screen can take more or less time to update, depending on:
    • Pixel color. E.g. white-to-black response time might be longer than black-to-white.
    • Panel type. E.g. OLED will normally be much quicker than LCD panels.
    • Screen configuration. E.g. enabling 'overdrive', enabling 'game mode', etc.
  • Press A (Advanced mode) to see more keys and additional information.

Advanced Mode screenshot

Dependencies

To run the EXE, you don't need anything else. So move along, nothing to see in this section :)

To run the Python code directly, you'll need OpenCV for Python, NumPy, and whichever Python interpreter you prefer.
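A quick way to check that the dependencies are available (assuming the usual opencv-python and numpy packages):

```python
# Prints the installed OpenCV and NumPy versions, confirming both are importable.
import cv2
import numpy
print("OpenCV", cv2.__version__, "| NumPy", numpy.__version__)
```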

To build the binary (with compile.py), you'll need PyInstaller.

Credits and licenses

InputLagTimer software:

Copyright 2021 Bruno Gonzalez Campo | [email protected] | @stenyak

Distributed under MIT license (see license.txt)

InputLagTimer icon:

Copyright 2021 Bruno Gonzalez Campo | [email protected] | @stenyak

Distributed under CC BY 3.0 license (see license_icon.txt)

Icon derived from:

Releases
  • v1.2 (Mar 29, 2022)

    • Display summary of measured latencies: min/avg/max latencies and a histogram
    • Added display with the current framerate
    • Fixed incorrect timing when a webcam dropped below the advertised framerate
    • The 'a' key will now cycle between varying amounts of detail (more detail can lead to lower framerates)
    • Add CC license links on readme
    • Minor cleanups here and there

    Full Changelog: https://github.com/stenyak/inputLagTimer/compare/v1.1...v1.2

  • v1.1 (Jan 8, 2022)

    • Fix safety timeout kicking in too soon if using a custom maxLatency
    • Fix first webcam being ignored when running the program without arguments
    • Rename compiled file from camelCase to CamelCase

    Full Changelog: https://github.com/stenyak/inputLagTimer/compare/v1.0...v1.1

  • v1.0 (Jan 8, 2022)
