Virtual hand gesture mouse using a webcam

Overview

NonMouse


The Japanese version of this README is available here.

This is an application that lets you use your hand itself as a mouse.
The program uses a webcam to recognize your hand and control the mouse cursor.

A demo video is available on YouTube.


Features

  • Entirely new mouse: a mouse driven entirely by software. It recognizes your hand and works as a mouse.
  • NonMouse can be invoked by a global hotkey even when the application is inactive.
  • Works well alongside typing.
  • By pointing the webcam at your display, you can use it like a touch display.
  • You can use the mouse wherever you want.
  • Just download it from the latest release (Windows and macOS only).

Installation

📁 Run as executable file

Download the zip file that matches your environment from the latest release.

OR

🐍 Run as python

Run the following script.

$ git clone https://github.com/takeyamayuki/NonMouse
$ cd NonMouse
$ pip install -r requirements.txt

If you have trouble installing mediapipe, please visit the official website.
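To check that the main dependencies installed correctly, a quick sanity check like the following can help (a minimal sketch, not part of the repository):

    # Sanity check for the main dependencies (illustrative, not part of NonMouse).
    import cv2
    import mediapipe  # noqa: F401  (just verifying the import succeeds)

    print("OpenCV:", cv2.__version__)

    # Try to open the default camera (index 0) and grab a single frame.
    cap = cv2.VideoCapture(0)
    ok, _ = cap.read()
    cap.release()
    print("Camera frame captured:", ok)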

† For macOS, you need to add the app you run NonMouse from (such as Terminal or VS Code) to the Accessibility and Camera sections under Security & Privacy in System Preferences.

Usage

1. Install a camera

The following three camera placements are assumed.

  • Normal: Place a webcam normally and point it at yourself (or use your laptop's built-in camera). Point the palm of your hand at the camera.
  • Above: Place the camera above your hand and point it down at your hand. Point the back of your hand at the camera.
  • Behind: Place the camera behind you and point it at the display. Point the back of your hand at the camera.

2. Run

  • Run the executable as described in the GitHub wiki.

    OR

  • Run the following commands, continuing from the installation steps above.

    For Windows and Linux (the global hotkey function does not work on Linux):

    $ python3 app.py

    For macOS, elevated privileges are required:

    $ sudo python3 app.py

3. Settings

When you run the program, you will see a screen similar to the following. On this screen, you can select the camera and set the sensitivity.


  • Camera
    Select a camera device. If multiple cameras are connected, try them in order, starting with the smallest index (a short probe sketch follows this section).

  • How to place
    Select the position where you placed the camera: one of Normal, Above, or Behind, as described in [📷 1. Install a camera].

  • Sensitivity
    Set the sensitivity. If set too high, the mouse cursor will shake slightly.

When you are done with the settings, click continue. The camera image will then be displayed, and you can use NonMouse with the settings you selected.
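If you are not sure which index corresponds to which device, a quick probe like the one below lists the camera indices OpenCV can open (a standalone sketch, not part of NonMouse):

    # Probe the first few camera indices with OpenCV (illustrative helper).
    import cv2

    for index in range(5):  # check indices 0..4
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            ok, _ = cap.read()
            print(f"camera {index}: {'frame captured' if ok else 'opens, but no frame'}")
        cap.release()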

4. Hand Movements

Gestures: stop cursor, left click, right click, scroll.

The following hand movements are enabled only while you hold down Alt (Windows) or Command (macOS). You can define your own global hotkey by editing the code here. This works even when the application window is not active. The global hotkey feature is only available on Windows and macOS.
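For reference, a global hotkey can be registered with a library such as pynput; the key combination and callback below are illustrative assumptions, not NonMouse's actual configuration:

    # Illustrative global-hotkey registration (assumption: pynput is used here
    # only as an example; see the project source for the real hotkey handling).
    from pynput import keyboard

    def on_toggle():
        print("hotkey pressed - toggle gesture control here")

    listener = keyboard.GlobalHotKeys({"<cmd>+<alt>+n": on_toggle})
    listener.start()   # runs in a background thread
    listener.join()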

  • cursor
    • Move the cursor: the tip of your index finger → a blue circle appears at the tip of your index finger.
    • Stop the cursor: touch the tip of your index finger to the tip of your middle finger → the blue circle disappears.
  • left click
    • Left click: touch the tip of your thumb to the second joint of your index finger → a yellow circle appears at the tip of your index finger (a rough detection sketch follows the notes below).
    • Left click release: separate your thumb tip from the second joint of your index finger → the yellow circle disappears.
    • Double click: left click twice within 0.5 seconds.
  • other
    • Right click: hold the click state for 1.5 seconds without moving the cursor → a red circle appears at the tip of your index finger.
    • Scroll: fold your index finger and move it to scroll → a black circle appears.

† Use it in a well-lit environment.
† Keep your hand facing the camera as squarely as possible.
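As a rough illustration of how such gestures can be detected, the sketch below measures the distance between two MediaPipe hand landmarks and treats a small distance as a click; the landmark pair and threshold are examples, not NonMouse's actual logic:

    # Illustrative pinch/click detection with MediaPipe Hands
    # (assumption: thresholds and landmark choices are examples only).
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    cap = cv2.VideoCapture(0)

    with mp_hands.Hands(max_num_hands=1) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                thumb_tip = lm[mp_hands.HandLandmark.THUMB_TIP]
                index_pip = lm[mp_hands.HandLandmark.INDEX_FINGER_PIP]
                # Landmarks are in normalized image coordinates (0..1).
                dist = ((thumb_tip.x - index_pip.x) ** 2
                        + (thumb_tip.y - index_pip.y) ** 2) ** 0.5
                if dist < 0.05:  # example threshold for "fingers touching"
                    print("click gesture detected")
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break
    cap.release()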

5. Quit

Press Ctrl+C when a terminal window is active.
Press the close button (valid only on Windows and Linux) or the Esc key when the application window is active.

Build

† The built binary files can be downloaded from the latest release.

In app-mac.spec and app-win.spec, change pathex to fit your environment.
Run the following scripts for each OS.

  • Windows

    Copy and paste the location reported by pip show mediapipe into datas in app-win.spec, referring to what is written there originally.
    Then run the following:

    $ pip show mediapipe
    ...
    Location: c:\users\namik\appdata\local\programs\python\python37\lib\site_packages
    ...
    #Copy and paste into the datas in app-win.spec
    $ pyinstaller app-win.spec
    ...
  • macOS

    Create a venv environment and run pip install there, because the directory specified in datas assumes a venv environment.

    $ python3 -m venv venv
    $ . venv/bin/activate
    (venv)$ pip install -r requirements.txt
    (venv)$ pyinstaller app-mac.spec
Comments
  • error: PyObjC requires macOS to build


    Hello. When installing requirements on Linux I get this error:

    Collecting pyobjc-core==7.3
      Downloading pyobjc-core-7.3.tar.gz (684 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 684.2/684.2 KB 5.4 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [2 lines of output]
          running egg_info
          error: PyObjC requires macOS to build
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    opened by jbrepogmailcom 2
  • Crashes with a ZeroDivisionError

    cfps = int(cap.get(cv2.CAP_PROP_FPS)) returns 1, and even after going through if cfps < 30: cap.set(cv2.CAP_PROP_FRAME_WIDTH, cap_width) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, cap_height) cfps = int(cap.get(cv2.CAP_PROP_FPS)), cfps is still 1, so ran = int(cfps/10) becomes 0 and the program crashes with a ZeroDivisionError.

    Environment: PC: MacBook Pro 2020 (Intel), OS: macOS Monterey, camera: built-in Mac webcam, Python: 3.9.7

    opened by takpika 2
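    A defensive sketch for this crash (illustrative only, not necessarily the fix that was shipped) clamps the derived value so it can never reach zero:

        # Clamp ran to at least 1 so int(cfps/10) == 0 can no longer cause
        # a ZeroDivisionError (illustrative sketch based on the issue above).
        cfps = int(cap.get(cv2.CAP_PROP_FPS))
        if cfps < 30:
            cap.set(cv2.CAP_PROP_FRAME_WIDTH, cap_width)
            cap.set(cv2.CAP_PROP_FRAME_HEIGHT, cap_height)
            cfps = int(cap.get(cv2.CAP_PROP_FPS))
        ran = max(1, int(cfps / 10))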
  • Camera FPS drops to about 5 Hz on macOS Monterey

    https://github.com/takeyamayuki/NonMouse/blob/1e7f7bc5e9f52a29196ae631b4cb7e83a910aa27/app.py#L117-L121

    The camera FPS obtained with cfps = int(cap.get(cv2.CAP_PROP_FPS)) ends up around 5 Hz, so the cursor jitters. This is probably a cv2 issue.

    opened by takeyamayuki 0
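    When CAP_PROP_FPS cannot be trusted, the effective frame rate can instead be measured by timing frames; a minimal sketch (an illustration, not the project's code):

        # Measure effective FPS by timing frames rather than trusting
        # cv2.CAP_PROP_FPS (illustrative workaround sketch).
        import time
        import cv2

        cap = cv2.VideoCapture(0)
        frames, start = 0, time.time()
        while frames < 60:
            ok, _ = cap.read()
            if not ok:
                break
            frames += 1
        elapsed = time.time() - start
        print("measured FPS:", frames / elapsed if elapsed > 0 else 0)
        cap.release()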
  • How to set screen size?


    Hello, the mouse only works on the left two thirds of the screen. I guess it has something to do with screen size; my laptop has 1920x1080. Can I reconfigure it somewhere?

    opened by jbrepogmailcom 1
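    If the cursor only covers part of the screen, the mapping from normalized camera coordinates to screen pixels is the place to look. A generic sketch of such a mapping, using pyautogui purely for illustration (not necessarily the library NonMouse uses):

        # Map a normalized coordinate (0..1) to full-screen pixel coordinates
        # (illustrative sketch; library choice and names are assumptions).
        import pyautogui

        screen_w, screen_h = pyautogui.size()

        def move_to_normalized(x_norm, y_norm):
            # x_norm, y_norm in [0, 1], e.g. MediaPipe landmark coordinates
            pyautogui.moveTo(int(x_norm * screen_w), int(y_norm * screen_h))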
  • When the camera or hand is tilted, the cursor movement no longer matches the fingertip movement

    For example, if the camera is mounted at an angle, the hand seen by the camera is compressed vertically, so the measured movement becomes correspondingly smaller. In my current setup the camera points straight at the hand, so it is not a problem for me, but for first-time users this is a major obstacle to using the application.

    Proposed solution: prepare a set of reference finger-joint coordinates, compare them with the current coordinates to estimate how much the hand is tilted, and adjust dx and dy accordingly.

    opened by takeyamayuki 0
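    A rough sketch of the proposed compensation (an illustration of the idea only; names and scaling are assumptions):

        # Scale cursor deltas by the ratio between a reference landmark span,
        # captured while the hand faces the camera squarely, and the span
        # observed now; a tilted hand or camera shrinks the observed span.
        def compensate(dx, dy, ref_span, cur_span):
            scale = ref_span / cur_span if cur_span else 1.0
            return dx * scale, dy * scale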
Releases (2.6.0)