Graph Coloring - Weighted Vertex Coloring Problem

Overview

MIT License

This project provides several local search algorithms and a Monte Carlo Tree Search (MCTS) algorithm for the Weighted Vertex Coloring Problem (WVCP).

This problem is a variant of the Graph Coloring Problem. Given a weighted graph G = (V, E), with V the set of vertices, E the set of edges and w(v) the weight associated with each vertex v in V, the WVCP consists in finding a partition of the vertices V into k color groups S = (V_1, ..., V_k) (1 \leq k \leq |V|) such that no two adjacent vertices belong to the same color group and such that the objective function f(S) = \sum_{i=1}^{k} \max_{v \in V_i} w(v) is minimized.
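To make the objective concrete, here is a minimal Python sketch (illustrative only, not taken from the project code) that scores a partition assumed to be legal, i.e. with no two adjacent vertices in the same group:

# Minimal sketch: computing the WVCP objective f(S) of a legal partition.
# `weights` maps each vertex to w(v); `groups` is the partition S.
def wvcp_score(weights, groups):
    """Sum, over all color groups, of the heaviest vertex weight in the group."""
    return sum(max(weights[v] for v in group) for group in groups if group)

# Example: vertices 0..3 with weights 5, 3, 4, 2, colored into two groups.
weights = {0: 5, 1: 3, 2: 4, 3: 2}
groups = [{0, 3}, {1, 2}]
print(wvcp_score(weights, groups))  # 5 + 4 = 9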

This project is coded in C++ for the computational part and in Python for the data analysis. This work is related to the article:

Grelier, C., Goudet, O., Hao, J.-K., 2022. On Monte Carlo Tree Search for Weighted Vertex Coloring. arXiv:2202.01665 [cs]. https://arxiv.org/abs/2202.01665

Requirements

To compile this project you need:

  • cmake 3.14+
  • Doxygen (optional, to build the documentation)
  • Python 3.8+ (for the slurm job generators, data analysis and documentation)

Run the project

Clone the project

git clone https://github.com/Cyril-Grelier/gc_wvcp_mcts

Go to the project directory

cd gc_wvcp_mcts

Load the instances

git submodule init
git submodule update

Build and compile the project:

./scripts/build.sh

Run the project:

cd build
./gc_wvcp --help

Run with slurm

A generator of “to_eval” files is provided to run jobs with slurm or GNU Parallel. Set the desired instances, random seeds and parameters in scripts/generator_to_eval_ls.py (for local search) or scripts/generator_to_eval_mcts.py (for MCTS), then run the script with Python (no particular packages are required):

python scripts/generator_to_eval_mcts.py

The script generates an output directory and a to_eval file containing one command line per job, to be run with slurm or GNU Parallel. A hedged sketch of what such a generator typically does is shown below.
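The sketch below illustrates the general idea of a to_eval generator: one command line per (instance, seed) combination. The flag names and instance names are hypothetical, used only for illustration; the real arguments are defined in the project's generator scripts and in the gc_wvcp help output.

# Illustrative sketch of a "to_eval" generator (hypothetical flag and
# instance names, not the real gc_wvcp interface).
import itertools
import pathlib

instances = ["instance_a", "instance_b"]  # hypothetical instance names
seeds = range(5)

pathlib.Path("output").mkdir(exist_ok=True)
with open("to_eval", "w") as f:
    for instance, seed in itertools.product(instances, seeds):
        # Hypothetical arguments, for illustration only.
        f.write(f"./gc_wvcp --instance {instance} --rand_seed {seed} "
                f"--output output/{instance}_{seed}.csv\n")

Each line of the resulting file can then be consumed by a slurm array job or passed to GNU Parallel.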

Create Python environment for data analysis or documentation

Make sure to create the Python environment and activate it before running the data analysis or documentation scripts:

./scripts/build_python.sh
source venv/bin/activate

Data analysis

scripts/generate_table.py takes the raw data and converts it to xlsx files (in the xlsx_files directory), with the best scores highlighted and p-values computed.

Make sure to set all required methods, instances and output names directly in the script before running it.
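As a rough illustration of the kind of comparison such a table script performs (this is not the project's generate_table.py), the sketch below compares the best scores of two methods on the same instances with a Wilcoxon signed-rank test and exports the table to xlsx. It assumes the Python data-analysis environment provides pandas, scipy and openpyxl; the method names and scores are dummy values.

# Illustrative sketch: pairwise comparison of two methods and xlsx export.
import pandas as pd
from scipy.stats import wilcoxon

scores = pd.DataFrame({
    "instance": ["instance_a", "instance_b", "instance_c", "instance_d"],
    "method_a": [10, 12, 9, 14],   # dummy best scores
    "method_b": [11, 14, 12, 18],  # dummy best scores
})
stat, p_value = wilcoxon(scores["method_a"], scores["method_b"])
scores.to_excel("example_table.xlsx", index=False)
print(f"p-value: {p_value:.3f}")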

Results

You can find the raw results in outputs, from runs of the code on different instances on the cluster of Nantes (https://ccipl.univ-nantes.fr/, nazare nodes). These files are in CSV format: the header is on the first line, followed by one line per improving solution found during the search (including the complete solution); the last line corresponds to the best solution found during the whole search, with the number of iterations, the time, etc. at the end of the run. The processed data can be found in xlsx_files (generated by the scripts/generate_table.py script). The results in those files differ slightly from the results in the article, as they were computed on a different CPU, but the tendencies stay the same.
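Given that format, extracting the best solution of a run amounts to reading the last line of the CSV. A minimal sketch, assuming pandas and a hypothetical file path (the actual column names depend on the solver output):

# Illustrative sketch: load one raw result file and extract the best solution,
# which is the last line of the CSV as described above.
import pandas as pd

run = pd.read_csv("outputs/example_run.csv")  # hypothetical path
best = run.iloc[-1]  # last improving solution = best solution of the run
print(best)          # score, time, iterations, ... of the best solution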

Documentation

You can generate the documentation by running :

cd docs
make html

The main page of the documentation will be located at docs/_build/html/index.html. It is a basic documentation generated from the comments in the code.

Acknowledgements

We would like to thank Dr. Wen Sun for sharing the binary code of the AFISA algorithm [1] (AFISA has been reimplemented from the article, afisa_original), Dr. Yiyuan Wang for sharing the code of the RedLS algorithm [2] (RedLS has been reimplemented from the article, redls) and Pr. Bruno Nogueira for sharing the code of the ILS-TS algorithm [3] (parts of this code have been used and adapted in this project, ilsts).

Owner

Cyril Grelier, PhD student currently working on metaheuristics, soon to be guided by deep learning, to solve graph coloring problems.