Myo Keylogging

This is the source code for our paper My(o) Armband Leaks Passwords: An EMG and IMU Based Keylogging Side-Channel Attack by Matthias Gazzari, Annemarie Mattmann, Max Maass and Matthias Hollick in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 4, 2021.

We include the software used for recording the dataset (record folder) and the software for training and running the neural networks (ml folder) as well as analyzing the results (analysis folder). The scripts folder provides some helper scripts for automating batches of hyperparameter optimization, model fitting, analyses and more. The results folder includes a pickled version of the predictions of our models, on which analyses can be run, e.g. to reproduce the paper results.
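
For orientation, the resulting top level layout looks roughly as follows (a sketch based on the folders described above and in the installation section; file contents are abbreviated):

analysis/     analysis scripts for evaluating predictions
ml/           training and inference code for the neural networks
record/       recording software for the Myo armbands
results/      pickled model predictions and generated result files
scripts/      helper scripts for batching hyperparameter optimization, fitting and analyses
train-data/   training records (imported during installation)
test-data/    test records (imported during installation)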

Installation

To install the project, first clone the repository and change directory into the fresh clone:

git clone https://github.com/seemoo-lab/myo-keylogging.git
cd myo-keylogging

You can use a python virtual environment (or any other virtual environment of your choice):

mkvirtualenv myo --system-site-packages
workon myo
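
If virtualenvwrapper is not available, Python's built-in venv module is an equivalent alternative (a minimal sketch; the environment name .venv is arbitrary):

python3 -m venv .venv
source .venv/bin/activate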

To make sure you have the newest software versions you can run an upgrade:

pip install --upgrade pip setuptools

To install the requirements run:

pip install -r requirements.txt

Finally, import the training and test data into the project. The top level folder should include a folder train-data with all the records for training the models and a folder test-data with all the records for testing the models.

wget https://zenodo.org/record/5594651/files/myo-keylogging-dataset.zip
unzip myo-keylogging-dataset.zip
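
As a quick sanity check that the archive extracted into the expected layout (assuming the zip contains the train-data and test-data folders described above), list a few records:

ls train-data | head
ls test-data | head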

Using the record library, you can extend this dataset.

Rerun of Results

To reproduce our results from the provided predictions of our models, go to the top level directory and run:

./scripts/create_results.sh

This will recreate all performance value files and plots in the subfolders of the results folder as used in the paper.

Run the following to list the fastest and slowest typists and to determine their class imbalance as stored in the results/train-data-skew.csv and results/test-data-skew.csv files:

python -m analysis exp_key_data

To recreate the provided predictions and class skew files, execute the following from the top level directory:

./scripts/create_models.sh
./scripts/create_predictions.sh
./scripts/create_class_skew_files.sh

This will fit the models with the current choice of hyperparameters and run each model on the test dataset to create the required predictions for analysis. Additionally, the class skew files will be recreated.

To run the hyperparameter optimization, either run the run_shallow_hpo.sh script or, when on a SLURM cluster, the slurm_run_shallow_hpo.sh script:

sbatch scripts/slurm_run_shallow_hpo.sh
./scripts/run_shallow_hpo.sh

Afterwards you can use the merge_shallow_hpo_runs.py script to combine the results for easier evaluation of the hyperparameters.

Fit Models

In order to fit and analyze your own models, go to the top level directory and run any of:

python -m ml crnn
python -m ml resnet
python -m ml resnet11
python -m ml wavenet

This will fit the respective model with the default parameters and in binary mode for keystroke detection. In order to fit multiclass models for keystroke identification, use the encoding parameter, e.g.:

python -m ml crnn --encoding "multiclass"

In order to test specific sensors, ignore the others (note that quaternions are ignored by default), e.g. to use only EMG on a CRNN model, use:

python -m ml crnn --ignore "quat" "acc" "gyro"
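
Conversely, a model can be restricted to the IMU data by ignoring the EMG channels. The sensor name "emg" used below is an assumption; check the --help output for the exact names accepted by --ignore:

python -m ml crnn --ignore "emg"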

To run a hyperparameter optimization, run e.g.:

python -m ml crnn --func shallow_hpo --step 5

To gain more information on possible parameters, run e.g.:

python -m ml crnn --help

Some parameters for the neural networks are fixed in the code.

Analyze Models

In order to analyze your models, run apply_models to create the predictions as pickled files. On these you can run further analyses found in the analysis folder.

To run apply_models on a binary model, do:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding binary --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS>

To run a multiclass model, do:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding multiclass --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS>

To chain a binary and multiclass model, do e.g.:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding chain --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS> --tolerance 10

Further parameters of interest for analyses are filters on the users (--users known or --users unknown) and on the data (--data known or --data unknown). These restrict the evaluation to users who are or are not part of the training data, or to data typed by all other users or by no other user, respectively.
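
For example, to restrict the evaluation to participants who are not part of the training data, append the corresponding filter to any of the above calls:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding binary --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS> --users unknown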

For more information, run:

python -m analysis apply_models --help

To later recreate model performance results and plots, run:

python -m analysis apply_models --encoding <ENCODING> --load_results results/<PATH_TO_PKL> --save_path results/<PATH_TO_PKL> --save_only

with the appropriate encoding of the model used to create the pickled results.
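
For instance, assuming a pickled result file of a binary model named results/binary_crnn.pkl (a hypothetical file name), the call would be:

python -m analysis apply_models --encoding binary --load_results results/binary_crnn.pkl --save_path results/binary_crnn.pkl --save_only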

To run further analyses on the generated predictions, create or choose your analysis from the analysis folder and run:

python -m analysis <ANALYSIS_NAME>

Refer to the help for further information:

python -m analysis <ANALYSIS_NAME> --help
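
For example, for the exp_key_data analysis used above:

python -m analysis exp_key_data --help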

Record Data

In order to record your own data(set), switch to the record folder. To record sensor data with our recording software, you will need one or two Myo armbands connected to your computer. Then, you can start a training data recording, e.g.:

python tasks.py -s 42 -l german record touch_typing --left_tty <TTY_LEFT_MYO> --left_mac <MAC_LEFT_MYO> --right_tty <TTY_RIGHT_MYO> --right_mac <MAC_RIGHT_MYO> --kb_model TADA68_DE

for a German recording with seed 42, a touch typist and a TADA68 German physical keyboard layout or

python tasks.py -s 42 -l english record touch_typing --left_tty <TTY_LEFT_MYO> --left_mac <MAC_LEFT_MYO> --right_tty <TTY_RIGHT_MYO> --right_mac <MAC_RIGHT_MYO> --kb_model TADA68_US

for an English recording with seed 42, a touch typist and a TADA68 English physical keyboard layout.

In order to start a test data recording, simply run passwords.py instead of tasks.py.

After recording training data, please execute the following script to complete the meta data:

python update_text_meta.py -p ../train-data/

After recording test data, please execute the following two scripts to complete the meta data:

python update_pw_meta.py -p ../test-data/
python update_cuts.py -p ../test-data/

For further information, check:

python tasks.py --help
python passwords.py --help

Note that the recording software includes text extracts as outlined in the acknowledgments below.

Acknowledgments

This work includes the following external materials to be found in the record folder:

  1. Various texts from Wikipedia available under the CC-BY-SA 3.0 license.
  2. The EFF's New Wordlists for Random Passphrases available under the CC-BY 3.0 license.
  3. An extract of the Top 1000 most common passwords by Daniel Miessler, Jason Haddix, and g0tmi1k available under the MIT license.

License

This software is licensed under the GPLv3 license, please also refer to the LICENSE file.
