Implemented a fully documented Particle Swarm Optimization algorithm (a basic model with a few advanced features) in Python

Overview

Enhanced Particle Swarm Optimization (PSO) with Python

Implemented a fully documented Particle Swarm Optimization (PSO) algorithm in Python that includes a basic model along with a few advanced features, such as updating the inertia weight, the cognitive and social learning coefficients, and the maximum velocity of the particle.
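
For reference, the basic PSO model updates each particle's velocity and position at every iteration from the particle's own best-known position and the swarm's best-known position. The following is a minimal sketch of that update rule; the names and signature are illustrative only, not the actual pso.py internals:

import numpy as np

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    # Random factors for the cognitive (own-best) and social (swarm-best) terms
    r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
    # Velocity update: inertia + cognitive pull towards pbest + social pull towards gbest
    v = w * v + c1 * r1 * (np.asarray(pbest) - x) + c2 * r2 * (np.asarray(gbest) - x)
    v = np.clip(v, -vmax, vmax)   # enforce the maximum velocity limit
    return x + v, v               # new position, new velocity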

Dependencies

  • NumPy
  • Matplotlib
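
Both can be installed with pip, for example:

pip install numpy matplotlib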

Utilities

Once the installation is finished (download or cloning), go to the pso folder and follow the simple guidelines below to execute PSO effectively (either at the command line or in a Python editor).

>>> from pso import PSO

Next, a fitness function (or cost function) is required. I have included four different fitness functions for example purposes, namely fitness_1, fitness_2, fitness_3, and fitness_4.

Fitness-1 (Himmelblau's Function)

Minimize: f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2

Optimum solution: x = 3 ; y = 2

Fitness-2 (Booth's Function)

Minimize: f(x, y) = (x + 2y - 7)^2 + (2x + y - 5)^2

Optimum solution: x = 1 ; y = 3

Fitness-3 (Beale's Function)

Minimize: f(x, y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2

Optimum solution: x = 3 ; y = 0.5

Fitness-4

Maximize: f(x, y) = 2xy + 2x - x^2 - 2y^2

Optimum solution: x = 2 ; y = 1

>>> from fitness import fitness_1, fitness_2, fitness_3, fitness_4
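
(If you prefer to write your own, a fitness function is just a Python function that maps a position vector to a scalar. As an illustration, fitness_1 for Himmelblau's function could look roughly like the sketch below; the actual signature in fitness.py may differ.)

def fitness_1(X):
    # Himmelblau's function: f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
    x, y = X[0], X[1]
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2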

Now, if you want, you can provide an initial position X0 and bound values for the particles (not mandatory), and optimize (minimize or maximize) the fitness function using PSO:

NOTE: the bool parameter min=True (default value) selects a MINIMIZATION problem and min=False a MAXIMIZATION problem.

>>> PSO(fitness=fitness_1, X0=[1,1], bound=[(-4,4),(-4,4)]).execute()

You will see output similar to the following:

OPTIMUM SOLUTION
  > [3.0000078, 1.9999873]

OPTIMUM FITNESS
  > 0.0

When fitness_4 is used, note that min=False must be set since it is a maximization problem.

>>> PSO(fitness=fitness_4, X0=[1,1], bound=[(-4,4),(-4,4)], min=False).execute()

You will see output similar to the following:

OPTIMUM SOLUTION
  > [2.0, 1.0]

OPTIMUM FITNESS
  > 2.0

In case you want to print the fitness value at each iteration, set verbose=True (here Tmax=50 is the maximum number of iterations):

>>> PSO(fitness=fitness_2, Tmax=50, verbose=True).execute()

You will see output similar to the following:

Iteration:   0  | best global fitness (cost): 18.298822
Iteration:   1  | best global fitness (cost): 1.2203953
Iteration:   2  | best global fitness (cost): 0.8178153
Iteration:   3  | best global fitness (cost): 0.5902262
Iteration:   4  | best global fitness (cost): 0.166928
Iteration:   5  | best global fitness (cost): 0.0926638
Iteration:   6  | best global fitness (cost): 0.0926638
Iteration:   7  | best global fitness (cost): 0.0114517
Iteration:   8  | best global fitness (cost): 0.0114517
Iteration:   9  | best global fitness (cost): 0.0114517
Iteration:   10 | best global fitness (cost): 0.0078867
Iteration:   11 | best global fitness (cost): 0.0078867
Iteration:   12 | best global fitness (cost): 0.0078867
Iteration:   13 | best global fitness (cost): 0.0078867
Iteration:   14 | best global fitness (cost): 0.0069544
Iteration:   15 | best global fitness (cost): 0.0063058
Iteration:   16 | best global fitness (cost): 0.0063058
Iteration:   17 | best global fitness (cost): 0.0011039
Iteration:   18 | best global fitness (cost): 0.0011039
Iteration:   19 | best global fitness (cost): 0.0011039
Iteration:   20 | best global fitness (cost): 0.0011039
Iteration:   21 | best global fitness (cost): 0.0007225
Iteration:   22 | best global fitness (cost): 0.0005875
Iteration:   23 | best global fitness (cost): 0.0001595
Iteration:   24 | best global fitness (cost): 0.0001595
Iteration:   25 | best global fitness (cost): 0.0001595
Iteration:   26 | best global fitness (cost): 0.0001595
Iteration:   27 | best global fitness (cost): 0.0001178
Iteration:   28 | best global fitness (cost): 0.0001178
Iteration:   29 | best global fitness (cost): 0.0001178
Iteration:   30 | best global fitness (cost): 0.0001178
Iteration:   31 | best global fitness (cost): 0.0001178
Iteration:   32 | best global fitness (cost): 0.0001178
Iteration:   33 | best global fitness (cost): 0.0001178
Iteration:   34 | best global fitness (cost): 0.0001178
Iteration:   35 | best global fitness (cost): 0.0001178
Iteration:   36 | best global fitness (cost): 0.0001178
Iteration:   37 | best global fitness (cost): 2.91e-05
Iteration:   38 | best global fitness (cost): 1.12e-05
Iteration:   39 | best global fitness (cost): 1.12e-05
Iteration:   40 | best global fitness (cost): 1.12e-05
Iteration:   41 | best global fitness (cost): 1.12e-05
Iteration:   42 | best global fitness (cost): 1.12e-05
Iteration:   43 | best global fitness (cost): 1.12e-05
Iteration:   44 | best global fitness (cost): 1.12e-05
Iteration:   45 | best global fitness (cost): 1.12e-05
Iteration:   46 | best global fitness (cost): 1.12e-05
Iteration:   47 | best global fitness (cost): 2.4e-06
Iteration:   48 | best global fitness (cost): 2.4e-06
Iteration:   49 | best global fitness (cost): 2.4e-06
Iteration:   50 | best global fitness (cost): 2.4e-06

OPTIMUM SOLUTION
  > [1.0004123, 2.9990281]

OPTIMUM FITNESS
  > 2.4e-06

Now, in case you want to plot the fitness value at each iteration, set plot=True (here Tmax=50 is the maximum number of iterations):

>>> PSO(fitness=fitness_2, Tmax=50, plot=True).execute()

You will see output similar to the following:

OPTIMUM SOLUTION
  > [1.0028365, 2.9977422]

OPTIMUM FITNESS
  > 1.45e-05

(A plot of the fitness value for each iteration is also displayed.)

Finally, in case you want to use the advanced features mentioned above (say you want to update the inertia weight parameter w), simply set update_w=True and that's it. Similarly, you can set update_c1=True (to update the individual cognitive parameter c1), update_c2=True (to update the social learning parameter c2), and update_vmax=True (to update the maximum limiting velocity of the particle, vmax):

>>> PSO(fitness=fitness_1, update_w=True, update_c1=True).execute()
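
For context, updating the inertia weight typically means decreasing it linearly from a large value (more exploration) to a small one (more exploitation) over the run, following Shi and Eberhart [4]. A minimal sketch of that idea is shown below; the exact schedule implemented in pso.py may differ:

def inertia_weight(t, Tmax, w_max=0.9, w_min=0.4):
    # Linearly decrease w from w_max at iteration 0 to w_min at iteration Tmax
    return w_max - (w_max - w_min) * t / Tmax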

References:

[1] Almeida, Bruno, and Victor Coppo Leite. "Particle Swarm Optimization: A Powerful Technique for Solving Engineering Problems." IntechOpen, 2019. doi:10.5772/intechopen.89633.

[2] He, Yan, Wei Ma, and Ji Zhang. "The Parameters Selection of PSO Algorithm Influencing on Performance of Fault Diagnosis." MATEC Web of Conferences 63 (2016): 02019. doi:10.1051/matecconf/20166302019.

[3] Clerc, M., and J. Kennedy. "The Particle Swarm: Explosion, Stability, and Convergence in a Multidimensional Complex Space." IEEE Transactions on Evolutionary Computation 6, no. 1 (February 2002): 58-73.

[4] Shi, Y. H., and R. C. Eberhart. "A Modified Particle Swarm Optimizer." In Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69-73, Anchorage, Alaska, USA, May 1998.

[5] Sermpinis, G., K. Theofilatos, A. Karathanasopoulos, E. F. Georgopoulos, and C. Dunis. "Forecasting Foreign Exchange Rates with Adaptive Neural Networks Using Radial-Basis Functions and Particle Swarm Optimization." European Journal of Operational Research.

[6] Particle Swarm Optimization (PSO) Visually Explained (https://towardsdatascience.com/particle-swarm-optimization-visually-explained-46289eeb2e14)

[7] Rajib Kumar Bhattacharjya, Introduction to Particle Swarm Optimization (http://www.iitg.ac.in/rkbc/ce602/ce602/particle%20swarm%20algorithms.pdf)
