Simulation-based performance analysis of server-less Blockchain-enabled Federated Learning

Overview

Blockchain-enabled Server-less Federated Learning

Repository containing the files used to reproduce the results of the publication "Blockchain-enabled Server-less Federated Learning".

BibTeX citation:

```
@article{wilhelmi2021blockchain,
  title={Blockchain-enabled Server-less Federated Learning},
  author={Wilhelmi, Francesc and Giupponi, Lorenza and Dini, Paolo},
  journal={arXiv preprint arXiv:2112.07938},
  year={2021}
}
```

Abstract

Motivated by the heterogeneous nature of devices participating in large-scale Federated Learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by Blockchain (BC) technology. In contrast to commonly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in the BC-enabled FL optimization, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large data sets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.

Repository description

This repository contains the resources used to generate the results included in the paper entitled "Blockchain-enabled Server-less Federated Learning". The files included in this repository are:

  1. LaTeX files: files used to generate the manuscript.
  2. Code & Results: scripts and code used to generate the results included in the paper.
  • Queue code: scripts used to run the Blockchain queuing delay simulations through the batch-service queue simulator.
  • TensorFlow code: Python scripts used to run the FL mechanisms through TensorFlow Federated.
  • Matlab code: MATLAB scripts used to process the results and plot the figures included in the manuscript.
  • Outputs: files containing the outputs of the different resources (queue simulator, TFF).
  • Figures: figures included in the manuscript, plus others with preliminary results.

Usage

Part 1: Batch service queue analysis

To generate the results related to the analysis of the queuing delay in the Blockchain, we used our batch-service queue simulator (commit: f846b66). Please refer to that repository's documentation for installation and execution guidelines. For the corresponding theoretical background, see [1].

The results obtained in this part can be found in "Matlab code/output_queue_simulator". To reproduce them, execute the scripts in the "Batch service queue" folder of the batch-service queue simulator.
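The batch-service abstraction behind the simulator is easy to illustrate: transactions queue up and a mined block serves up to a maximum batch of them at once. Below is a minimal, self-contained Python sketch of that idea, not the actual simulator; the arrival rate, mining rate, and batch limit are made-up illustrative values:

```python
import random

def simulate_batch_queue(lam=1.0, mu=0.1, batch_max=100, horizon=10_000.0, seed=1):
    """Toy discrete-event batch-service queue (hypothetical parameters).

    Transactions arrive as a Poisson process with rate `lam`; blocks are
    mined after exponential times with rate `mu`, each serving up to
    `batch_max` queued transactions. Returns the mean queuing delay.
    """
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    next_service = rng.expovariate(mu)
    queue, delays = [], []
    while t < horizon:
        if next_arrival < next_service:
            t = next_arrival
            queue.append(t)  # record the arrival timestamp
            next_arrival = t + rng.expovariate(lam)
        else:
            t = next_service
            batch, queue = queue[:batch_max], queue[batch_max:]
            delays.extend(t - arrival for arrival in batch)  # waiting times
            next_service = t + rng.expovariate(mu)
    return sum(delays) / len(delays) if delays else float('nan')

print(f"Mean queuing delay: {simulate_batch_queue():.2f} time units")
```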

Part 2: FLchain analysis

TensorFlow Federated (TFF) has been used to evaluate the s-FLchain and a-FLchain mechanisms proposed in the manuscript. To get started with TF (and TFF), we strongly recommend the tutorials at https://www.tensorflow.org/federated/tutorials/tutorials_overview.

Once the TFF environment has been set up, our results can be reproduced by using the scripts in "TensorFlow code":

  1. centalized_baseline.py: centralized ML model used to obtain baseline results (upper/lower bounds).
  2. sFLchain_vs_aFLchain.py: script that generates the output for the comparison between the synchronous and asynchronous models.

The output results of this part can be found in "Matlab code/output_tensorflow".
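For orientation, the following is a minimal synchronous FedAvg sketch in TFF, not the repository's actual script. It reuses the hyperparameters reported in the paper (two hidden ReLU layers, SGD, local/global learning rates 0.01/1, 5 epochs, batch size 20), but the feature dimension, layer widths, and synthetic client data are made-up placeholders; the classic `tff.learning.build_federated_averaging_process` API is used, which newer TFF versions replace with `tff.learning.algorithms.build_weighted_fed_avg`:

```python
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

FEATURES, CLASSES = 32, 10  # hypothetical problem size

def model_fn():
    # Two hidden ReLU layers, as reported in the paper's ML parameters.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(FEATURES,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(CLASSES, activation='softmax'),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=collections.OrderedDict(
            x=tf.TensorSpec(shape=(None, FEATURES), dtype=tf.float32),
            y=tf.TensorSpec(shape=(None, 1), dtype=tf.int32)),
        loss=tf.keras.losses.SparseCategoricalCrossentropy())

def synthetic_client(seed):
    # Hypothetical client dataset; .repeat(5) emulates 5 local epochs.
    rng = np.random.default_rng(seed)
    data = collections.OrderedDict(
        x=rng.normal(size=(100, FEATURES)).astype(np.float32),
        y=rng.integers(0, CLASSES, size=(100, 1)).astype(np.int32))
    return tf.data.Dataset.from_tensor_slices(data).repeat(5).batch(20)

# FedAvg with local/global learning rates 0.01/1.0, as in the paper.
process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

state = process.initialize()
clients = [synthetic_client(seed) for seed in range(5)]
for round_num in range(10):
    state, metrics = process.next(state, clients)
    print(round_num, metrics)
```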

Part 3: End-to-end analysis framework

Finally, to gather all the resources together, we have used the end-to-end latency framework contained in this repository ("Matlab code/simulation_scripts"). Those files contain the communication and computation models used to compute the total latency experienced by each considered Blockchain-enabled FL mechanism. Moreover, to obtain the end-to-end latency and accuracy results, the aforementioned scripts gather and process the outputs of both the batch-service queue simulator and TFF (a toy sketch of the latency composition follows the content list below).

Content:

  1. 0_preliminary_results: evaluation of several FL parameters via TFF (outside the scope of this publication).
  2. 1_blockchain_analysis: evaluation of the Blockchain queuing delay (refer to Part 1: Batch service queue analysis).
  3. 2_flchain: evaluation of the FL accuracy (refer to Part 2: FLchain analysis) and end-to-end latency analysis. Includes models to compute communication and computation-related delays.
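While the actual models live in the MATLAB scripts above, the decomposition they implement can be summarized as a sum of per-round delay components. The Python helper below is a hypothetical rendering of that composition, with made-up example values:

```python
def e2e_latency(t_local_train, t_tx_upload, t_queue, t_block_gen,
                t_block_prop, n_rounds=1):
    """End-to-end FL completion delay as a sum of per-round components:
    local training + transaction upload + BC queuing delay + block
    generation + block propagation (all inputs in seconds; the names
    and the flat per-round sum are illustrative assumptions)."""
    per_round = (t_local_train + t_tx_upload + t_queue
                 + t_block_gen + t_block_prop)
    return n_rounds * per_round

# Example: 5 training rounds with illustrative (made-up) component delays
print(e2e_latency(2.0, 0.5, 1.2, 0.8, 0.3, n_rounds=5), "seconds")
```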

Performance Evaluation

Simulation parameters

The simulation parameters used in the publication are as follows:

| Parameter | Value |
|---|---|
| **Blockchain (BC)** | |
| Number of miners | 19 |
| Transaction size | 5 kbits |
| Block header size | 20 kbits |
| Max. waiting time | 1000 seconds |
| Queue length | 1000 packets |
| **Communication** | |
| Min/max distance Client-BS | 0/4.15 meters |
| Bandwidth | 180 kHz |
| Carrier frequency | 2 GHz |
| Antenna gain | 0 dBi |
| Loss at the reference distance (P_L0) | 5 dB |
| Path-loss exponent (α) | 4.4 |
| Shadowing factor (σ) | 9.5 |
| Obstacles factor (γ) | 30 |
| Ground noise | -95 dBm |
| Capacity of P2P links | 5 Mbps |
| **Machine Learning (ML)** | |
| Learning algorithm | Neural Network |
| Number of hidden layers | 2 |
| Activation function | ReLU |
| Optimizer | SGD |
| Loss function | Categorical cross-entropy |
| Learning rate (local/global) | 0.01/1 |
| Number of epochs | 5 |
| Batch size | 20 |
| CPU cycles to process a data point | 10^-5 |
| Clients' clock speed | 1 GHz |
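To give a rough feel for how the communication parameters combine into a client uplink rate, here is a heavily hedged Python sketch: the log-distance path-loss expression and the transmit power are assumptions for illustration, not the exact model used in the paper, and only the tabled values (bandwidth, P_L0, α, σ, γ, noise floor, max distance) are taken from above:

```python
import math

def path_loss_db(d, pl0=5.0, alpha=4.4, sigma=9.5, gamma=30.0):
    """Assumed log-distance path loss (dB) with shadowing and obstacle
    terms; the exact combination used in the paper may differ."""
    d = max(d, 1e-3)  # avoid log10(0) at zero distance
    return pl0 + 10 * alpha * math.log10(d) + sigma / 2 + (d / 10) * gamma / 2

def shannon_rate_bps(tx_power_dbm=20.0, d=4.15, bw_hz=180e3, noise_dbm=-95.0):
    """Shannon capacity of one 180 kHz channel; tx power is a made-up value."""
    snr_db = tx_power_dbm - path_loss_db(d) - noise_dbm
    return bw_hz * math.log2(1 + 10 ** (snr_db / 10))

print(f"{shannon_rate_bps() / 1e6:.2f} Mbps at the maximum Client-BS distance")
```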

Simulation Results

In what follows, we present the results included in the manuscript. First, we refer to the Blockchain queuing delay analysis, where we assess the sensitivity of the Blockchain to various parameters, including the block size, the mining rate, the traffic intensity, and the miners' communication capacity.

Next, we provide a broader view of the Blockchain transaction confirmation latency by including delays other than the queuing delay, such as transaction upload, block generation, and block propagation.

Finally, we present the results obtained for the evaluation of s-FLchain and a-FLchain in terms of learning accuracy and learning completion time.

References

[1] Wilhelmi, F., & Giupponi, L. (2021). Discrete-Time Analysis of Wireless Blockchain Networks. arXiv preprint arXiv:2104.05586.

Contribute

If you want to contribute, please contact [email protected].

Owner
Francesc Wilhelmi
PhD Student at the Wireless Networking Research Group (Universitat Pompeu Fabra)