Code for our SIGCOMM'21 paper "Network Planning with Deep Reinforcement Learning".

Overview

0. Introduction

This repository contains the source code for our SIGCOMM'21 paper "Network Planning with Deep Reinforcement Learning".

Notes

The network topologies and the trained models used in the paper are not open-sourced. One can create synthetic topologies according to the problem formulation in the paper or modify the code for their own use case.
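As a starting point, here is a minimal sketch of a synthetic two-layer topology built with networkx (installed in Step 4 below). All node names, attributes, and capacities are illustrative assumptions, not the repo's actual data format; adapt them to the classes in source/topology and source/simulate.

import networkx as nx

# IP layer: routers connected by IP links with capacities (illustrative units).
ip_layer = nx.Graph()
ip_layer.add_edge("r1", "r2", capacity=100)
ip_layer.add_edge("r2", "r3", capacity=200)

# Optical layer: fiber spans that IP links are routed over.
optical_layer = nx.Graph()
optical_layer.add_edge("s1", "s2", length_km=80)
optical_layer.add_edge("s2", "s3", length_km=120)

# Map each IP link to its optical path (captures shared risks across layers).
ip_to_optical = {
    ("r1", "r2"): [("s1", "s2")],
    ("r2", "r3"): [("s2", "s3")],
}

print(ip_layer.number_of_nodes(), ip_layer.number_of_edges())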

1. Environment config

AWS instance configurations

  • AMI image: "Deep Learning AMI (Ubuntu 16.04) Version 43.0 - ami-0774e48892bd5f116"
  • First-stage: g4dn.4xlarge, with Threads 16 in gurobi.env
  • Others (ILP, ILP-heur, Second-stage): m5zn.12xlarge, with Threads 8 in gurobi.env (a sample gurobi.env is shown below)
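gurobi.env is a plain-text Gurobi parameter file that the solver reads from the working directory at startup; a minimal example with the thread count for First-stage runs:

# gurobi.env -- read automatically by Gurobi from the working directory
Threads 16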

Step 0: download the git repo
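For example, assuming <repo_url> is the address of this repository:

git clone <repo_url>
cd <repo_path>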

Step 1: install Linux dependencies

sudo apt-get update
sudo apt-get install build-essential libopenmpi-dev libboost-all-dev

Step 2: install Gurobi

cd <repo_path>/
./gurobi.sh
source ~/.bashrc
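Note that Gurobi requires a valid license. If gurobi.sh does not set one up for you, activate your license key with Gurobi's grbgetkey tool before running the solver:

grbgetkey <your-license-key>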
Step 3: set up and start a conda environment with Python 3.7.7

If you use the AWS Deep Learning AMI, conda is preinstalled.

conda create --name <env_name> python=3.7.7
conda activate <env_name>

Step 4: install Python dependencies in the conda env

cd <repo_path>/spinningup
pip install -e .
pip install networkx pulp pybind11 xlrd==1.2.0

Step 5: compile C++ program with pybind11

cd <repo_path>/source/c_solver
./compile.sh
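To sanity-check the build, you can try importing the compiled module from the conda environment. The module name c_solver is an assumption based on the directory name; substitute the name produced by compile.sh if it differs:

python -c "import c_solver"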

2. Content

  • source
    • c_solver: C++ implementation with Gurobi APIs for ILP solver and network plan evaluator
    • planning: ILP and ILP-heur implementation
    • results: stores the provided trained models and solutions, and the training logs
    • rl: the implementations of the actor-critic networks, the RL environment, and the RL solver
    • simulate: python classes of flow, spof, and traffic matrix
    • topology: python classes of network topology (both optical layer and IP layer)
    • test.py: the main script used to reproduce results
  • spinningup: the RL training library installed in Step 4
  • gurobi.sh
    • used to install the Gurobi solver

3. Reproduce results (for SIGCOMM'21 artifact evaluation)

Notes

  • Some data points are time-consuming to produce (i.e., First-stage for A-0, A-0.25, A-0.5, and A-0.75 in Figure 8, and B, C, D, E in Figure 9). We provide pretrained models in /source/results/trained/<topo>/, which are loaded by default.
  • We recommend distributing different data points and different experiments across multiple AWS instances so they run simultaneously (see the background-run example at the end of this section).
  • The default epoch_num for Figures 10, 11, and 12 is set to 1024 to guarantee convergence. Training can be terminated manually once convergence is observed.

How to reproduce

  • cd <repo_path>/source
  • Figure 7: python test.py fig_7 <epoch_num>; epoch_num can be set smaller than 10 (e.g., 2) to get results faster.
  • Figure 8: python test.py single_dp_fig8 <algo> <adjust_factor> produces one data point at a time (the default adjust_factor is 1).
    • For example, python test.py single_dp_fig8 ILP 0.0 runs the ILP algorithm for A-0.
    • Pretrained models are loaded by default if provided in source/results/trained/. To train from scratch, which is NOT RECOMMENDED, run python test.py single_dp_fig8 <algo> <adjust_factor> False.
  • Figures 9 & 13: python test.py single_dp_fig9 <topo> <algo> produces one data point at a time.
    • For example, python test.py single_dp_fig9 E NeuroPlan runs NeuroPlan (First-stage) for topology E with the pretrained model. To train from scratch, which is NOT RECOMMENDED, run python test.py single_dp_fig9 E NeuroPlan False.
    • python test.py second_stage <topo> <sol_path> <relax_factor> loads the first-stage solution from <sol_path> and runs the second stage on topology <topo> with the given relax_factor. For example, python test.py second_stage D "results/<run_id>/opt_topo/***.txt" 1.5.
    • We also provide our First-stage results in results/trained/<topo>/<topo>.txt, which can be used to run the second stage directly. For example, python test.py second_stage C "results/trained/C/C.txt" 1.5.
  • Figure 10: python test.py fig_10 <adjust_factor> <num_gnn_layer>.
    • adjust_factor = {0.0, 0.5, 1.0}, num_gnn_layer = {0, 2, 4}
    • For example, python test.py fig_10 0.5 2 runs NeuroPlan with 2-layer GNNs for topology A-0.5.
  • Figure 11: python test.py fig_11 <adjust_factor> <mlp_hidden_size>.
    • adjust_factor = {0.0, 0.5, 1.0}, mlp_hidden_size = {64, 256, 512}
    • For example, python test.py fig_11 0.0 512 runs NeuroPlan with hidden_size=512 for topology A-0.
  • Figure 12: python test.py fig_12 <adjust_factor> <max_unit_per_step>.
    • adjust_factor = {0.0, 0.5, 1.0}, max_unit_per_step = {1, 4, 16}
    • For example, python test.py fig_12 1.0 4 runs NeuroPlan with max_unit_per_step=4 for topology A-1.
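Since many data points are long-running, one convenient pattern is to launch each experiment in the background with nohup so it survives SSH disconnects (the log file name here is arbitrary):

cd <repo_path>/source
nohup python test.py single_dp_fig9 C NeuroPlan > fig9_C.log 2>&1 &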

4. Contact

For any questions, please contact hzhu at jhu dot edu.
