Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)

Overview

TOPSIS implementation in Python

Ching-Lai Hwang and K. Yoon introduced TOPSIS in 1981 as one of the Multiple Criteria Decision Making (MCDM) / Multiple Criteria Decision Analysis (MCDA) methods [1]. TOPSIS strives to minimize the distance between the chosen alternative and the positive ideal solution while maximizing its distance from the negative ideal solution [2]. In a nutshell, TOPSIS helps researchers rank alternative items against a set of criteria. The alternatives, and the value of each criterion for each alternative, are arranged in the following decision matrix:

(image: decision matrix)

Some criteria may be more important than others, so each criterion is assigned a weight, and the n weights are required to sum to one.

(image: weight vector, with weights summing to one)
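For reference, a minimal sketch in standard TOPSIS notation (an assumption here, since the original figures are not reproduced): m alternatives are evaluated on n criteria, and the weights sum to one.

```latex
% Decision matrix: m alternatives (rows) evaluated on n criteria (columns)
D = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix},
\qquad
W = (w_1, w_2, \ldots, w_n), \quad \sum_{j=1}^{n} w_j = 1
```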

Jahanshahloo et al. (2006) describe TOPSIS in six main phases, as follows [3]:

1) Normalized Decision Matrix

The first phase of TOPSIS is normalization. Researchers have proposed several normalization schemes; this section lists the most commonly used ones. Criteria (attributes) fall into two categories, cost and benefit, so each normalization method comes with two formulas: one for benefit criteria and one for cost criteria. According to Vafaei et al. (2018), some of these normalization methods include:

(image: table of normalization formulas for benefit and cost criteria)

All of the above normalization methods are coded in Normalization.py, and a related file, Normalized_Decision_Matrix.py, applies the chosen normalization method to the decision matrix. We now have a normalized decision matrix as follows:

(image: normalized decision matrix)
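As an illustration, here is a minimal sketch of one common scheme (vector normalization), assuming a NumPy decision matrix; the function name and the cost-criterion treatment are illustrative assumptions, and the repository's Normalization.py and Normalized_Decision_Matrix.py may implement the formulas differently:

```python
import numpy as np

def vector_normalize(matrix, criteria_types):
    """Vector-normalize a decision matrix (rows = alternatives, columns = criteria).

    criteria_types[j] is "benefit" or "cost"; for cost criteria the normalized
    value is inverted so that larger always means better.
    """
    matrix = np.asarray(matrix, dtype=float)
    norms = np.sqrt((matrix ** 2).sum(axis=0))         # column-wise Euclidean norms
    normalized = matrix / norms
    for j, kind in enumerate(criteria_types):
        if kind == "cost":
            normalized[:, j] = 1.0 - normalized[:, j]  # one common cost-criterion variant
    return normalized

# Example (toy numbers): first criterion is a cost, second a benefit
# vector_normalize([[250, 16], [200, 32]], ["cost", "benefit"])
```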

2) Weighted Normalized Decision Matrix

The weighted normalized decision matrix is calculated by multiplying each column of the normalized decision matrix by the corresponding criterion weight.

(image: weighted normalization formula)
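In formula form (standard TOPSIS notation, assumed here rather than taken from the missing image), each entry of the weighted normalized matrix is the normalized value scaled by its criterion weight:

```latex
v_{ij} = w_j \, r_{ij}, \qquad i = 1, \ldots, m, \; j = 1, \ldots, n
```

With NumPy this is a single broadcast multiplication, `normalized * weights`, when `weights` is a length-n vector.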

This multiplication is performed in the Weighted_Normalized_Decision_Matrix.py file. Now, we have a weighted normalized decision matrix as follows:

(image: weighted normalized decision matrix)

3) Ideal Solutions

As mentioned above, TOPSIS minimizes the distance between the chosen alternative and the positive ideal solution while maximizing its distance from the negative ideal solution. But what are the positive and negative ideal solutions?

If a criterion is benefit-based, the positive ideal solution (PIS) and negative ideal solution (NIS) are:

(image: PIS and NIS for benefit criteria)

If a criterion is cost-based, the PIS and NIS are:

(image: PIS and NIS for cost criteria)

In our code, ideal solutions are calculated in Ideal_Solution.py.
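Putting both cases together in the usual notation (an assumption here, since the original formula images are not reproduced), with J+ the set of benefit criteria and J- the set of cost criteria:

```latex
A^{+} = \left\{ v_1^{+}, \ldots, v_n^{+} \right\}, \qquad
v_j^{+} =
\begin{cases}
\max_i v_{ij}, & j \in J^{+} \\
\min_i v_{ij}, & j \in J^{-}
\end{cases}
\\[1ex]
A^{-} = \left\{ v_1^{-}, \ldots, v_n^{-} \right\}, \qquad
v_j^{-} =
\begin{cases}
\min_i v_{ij}, & j \in J^{+} \\
\max_i v_{ij}, & j \in J^{-}
\end{cases}
```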

4) Separation Measures

A measure is needed to quantify how far each alternative is from the ideal solutions. It has two parts. The separation of each alternative from the PIS is calculated as follows:

(image: separation from the PIS)

Also, the separation of each alternative from the NIS is calculated as follows:

(image: separation from the NIS)
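In the standard Euclidean form (assumed here), the two separation measures for alternative i are:

```latex
S_i^{+} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^{+} \right)^{2}}, \qquad
S_i^{-} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^{-} \right)^{2}}
```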

5) Closeness to the Ideal Solution

Now that the distances between the alternatives and the ideal solutions have been calculated, we rank the alternatives according to how close they are to the ideal solution. The relative closeness is calculated by the following formula:

(image: relative closeness formula)

It is clear that:

(image: bounds on the relative closeness)
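In the usual notation (assumed here), the relative closeness of alternative i and its bounds are:

```latex
C_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}, \qquad 0 \le C_i \le 1
```

C_i equals 1 when the alternative coincides with the PIS and 0 when it coincides with the NIS.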

6) Ranking

Now, the alternatives are ranked in decreasing order of closeness to the ideal solution. Both the separation measures and the relative closeness are calculated in Distance_Between_Ideal_and_Alternatives.py.

7) TOPSIS

In this final step, all of the previous .py files are combined and used as one integrated TOPSIS pipeline, as sketched below.
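As an illustration, the whole pipeline can be sketched in a few lines of NumPy. This is a compact, self-contained sketch of the standard algorithm, not the repository's exact code; the function name, variable names, and example numbers are made up for the example.

```python
import numpy as np

def topsis(matrix, weights, criteria_types):
    """Rank alternatives with TOPSIS.

    matrix         : (m, n) array, rows = alternatives, columns = criteria
    weights        : length-n weights summing to one
    criteria_types : length-n list of "benefit" or "cost"
    Returns the closeness scores and the ranking (best alternative first).
    """
    matrix = np.asarray(matrix, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # 1) vector normalization, 2) apply weights
    normalized = matrix / np.sqrt((matrix ** 2).sum(axis=0))
    weighted = normalized * weights

    # 3) positive / negative ideal solutions per criterion type
    benefit = np.array([t == "benefit" for t in criteria_types])
    pis = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    nis = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

    # 4) separation measures, 5) relative closeness
    s_plus = np.sqrt(((weighted - pis) ** 2).sum(axis=1))
    s_minus = np.sqrt(((weighted - nis) ** 2).sum(axis=1))
    closeness = s_minus / (s_plus + s_minus)

    # 6) rank alternatives in decreasing order of closeness
    ranking = np.argsort(-closeness)
    return closeness, ranking


if __name__ == "__main__":
    scores, order = topsis(
        [[250, 16, 12], [200, 16, 8], [300, 32, 16]],  # toy data
        weights=[0.4, 0.3, 0.3],
        criteria_types=["cost", "benefit", "benefit"],
    )
    print(scores, order)
```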

References

  1. Hwang, C.L. and Yoon, K., 1981. Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag. https://www.springer.com/gp/book/9783540105589
  2. Assari, A., Mahesh, T. and Assari, E., 2012b. Role of public participation in sustainability of historical city: usage of TOPSIS method. Indian Journal of Science and Technology, 5(3), pp.2289-2294.
  3. Jahanshahloo, G.R., Lotfi, F.H. and Izadikhah, M., 2006. An algorithmic method to extend TOPSIS for decision-making problems with interval data. Applied Mathematics and Computation, 175(2), pp.1375-1384.
  4. Vafaei, N., Ribeiro, R.A. and Camarinha-Matos, L.M., 2018. Data normalization techniques in decision making: case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), pp.19-38.
Owner

Hamed Baziyad