AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning

Overview

AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. AutoPentest-DRL can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit. This framework is intended for educational purposes, so that users can study the penetration testing attack mechanisms. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (CROND) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (JAIST) in Ishikawa, Japan.

An overview of AutoPentest-DRL is shown below. The framework receives user input regarding the logical target network, including vulnerability information; alternatively, the framework can use Nmap for network scanning to find actual vulnerabilities in a real target network with known topology. The MulVAL attack-graph generator is then used to determine potential attack trees, which are fed in a simplified form into the DQN Decision Engine. The attack path that is produced as output can be used to study the attack mechanisms on a large number of logical networks. Alternatively, the framework can use the attack path with penetration testing tools, such as Metasploit, making it possible for the user to study how the attack can be carried out on a real target network.

Overview of AutoPentest-DRL
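
To make the data flow above concrete, the following minimal, self-contained Python sketch mirrors the stages of the framework; the function names and the tiny placeholder graph are illustrative assumptions, not the framework's actual API.

    # Illustrative sketch of the pipeline described above; the real framework
    # obtains the attack graph from MulVAL and the path from its DQN engine.

    def generate_attack_graph(topology_file):
        # Stand-in for the MulVAL step: return an attack graph as
        # {node id: [successor node ids]}.
        return {1: [2], 2: [3], 3: []}

    def simplify(attack_graph):
        # Reduce the attack graph to an adjacency matrix, the simplified
        # form that is fed to the DQN Decision Engine.
        nodes = sorted(attack_graph)
        return [[1 if dst in attack_graph[src] else 0 for dst in nodes] for src in nodes]

    def choose_attack_path(attack_matrix):
        # Stand-in for the DQN Decision Engine, which selects the optimal path.
        return [1, 2, 3]

    matrix = simplify(generate_attack_graph("MulVAL_P/logical_attack_v1.P"))
    print("Proposed attack path:", choose_attack_path(matrix))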

Next, we provide brief information on how to set up and use AutoPentest-DRL. For details about its operation, please refer to the User Guide that we also make available.

Prerequisites

Several external tools are required in order to use AutoPentest-DRL; for the basic functionality (DQN training and attacks on logical networks), you'll need:

  • MulVAL: Attack-graph generator used by AutoPentest-DRL to produce possible attack paths for a given network. See the MulVAL page for installation instructions and dependencies. MulVAL should be installed in the directory repos/mulval in the AutoPentest-DRL folder. You also need to configure the /etc/profile file as discussed here. On some systems the tool epstopdf may also need to be installed, for instance by using the command below:
    sudo apt install texlive-font-utils
    

If you plan to use AutoPentest-DRL with real networks, you'll also need:

  • Nmap: Network scanner used by AutoPentest-DRL to determine vulnerabilities in a given real network. The command needed to install nmap on Ubuntu is given below:
    sudo apt install nmap
    
  • Metasploit: Penetration testing tool used by AutoPentest-DRL to actually conduct the attack proposed by the DQN engine on the real target network. To install Metasploit, you can use the installers made available on the Metasploit website. In addition, we use pymetasploit3 as an RPC API to communicate with Metasploit; this tool needs to be installed in the directory Penetration_tools/pymetasploit3 by following its author's instructions. A minimal connection check is sketched after this list.
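
As a quick sanity check that Metasploit's RPC interface is reachable from Python, the short sketch below connects via pymetasploit3 and lists what is available. It is not part of the framework itself; the password is a placeholder, and the connection settings must match your own msfrpcd configuration.

    # Hypothetical connectivity check, assuming an msfrpcd daemon is already
    # running (e.g. started with `msfrpcd -P yourpassword`).
    from pymetasploit3.msfrpc import MsfRpcClient

    # Match ssl/port/server to the way your msfrpcd daemon was started.
    client = MsfRpcClient('yourpassword', ssl=True)

    # A successful connection lets us enumerate modules and active sessions.
    print('Exploit modules available:', len(client.modules.exploits))
    print('Active sessions:', client.sessions.list)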

Setup

AutoPentest-DRL has been developed mainly on the Ubuntu 18.04 LTS operating system; other OSes may work, but have not been tested. In order to set up AutoPentest-DRL, use the releases page to download the latest version, and extract the source code archive into a directory of your choice (for instance, your home directory) on the host on which you intend to use it.

AutoPentest-DRL is implemented in Python, and it requires several packages to run. The file requirements.txt included with the distribution can be used to install the necessary packages via the following command that should be run from the AutoPentest-DRL/ directory:

$ sudo -H pip install -r requirements.txt

Quick Start

AutoPentest-DRL includes a trained DQN model, so you can use it out-of-the-box on a sample logical network topology by running the following command in a terminal from the AutoPentest-DRL/ directory:

$ python3 ./AutoPentest-DRL.py logical_attack

In this logical attack mode no actual attack is conducted; AutoPentest-DRL only determines the optimal attack path for the logical network topology described in the file MulVAL_P/logical_attack_v1.P. By comparing the output path with the visualization of the attack graph that MulVAL generates in the file mulval_results/AttackGraph.pdf, you can study the attack steps in detail.
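
If you want to cross-reference the node numbers in the printed attack path with the attack graph programmatically, the MulVAL output files can be inspected directly. The sketch below is only an illustration: it assumes the usual MulVAL CSV layout (node id, label, type, metric in VERTICES.CSV) and an output directory named mulval_result/; adjust the path and the example node ids to your own run.

    # Map MulVAL node ids to their labels so that the attack path printed by
    # AutoPentest-DRL can be compared with the generated attack graph.
    import csv

    labels = {}
    with open('mulval_result/VERTICES.CSV', newline='') as f:
        for row in csv.reader(f):
            if row:
                labels[int(row[0])] = row[1]

    attack_path = [6, 8, 10]  # hypothetical node ids taken from the framework's output
    for node in attack_path:
        print(node, labels.get(node, '(unknown node)'))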

For more information about the operation modes of AutoPentest-DRL, including the real attack mode and the training mode, see our User Guide.

References

For a research background regarding AutoPentest-DRL, please refer to the following references:

  • Z. Hu, R. Beuran, Y. Tan, "Automated Penetration Testing Using Deep Reinforcement Learning", IEEE European Symposium on Security and Privacy Workshops (EuroS&PW 2020), Workshop on Cyber Range Applications and Technologies (CACOE'20), Genova, Italy, September 7, 2020, pp. 2-10.
  • Z. Hu, "Automated Penetration Testing Using Deep Reinforcement Learning", Master's thesis, March 2021. https://hdl.handle.net/10119/17095

For a list of contributors to this project, see the file CONTRIBUTORS included in the distribution.

Comments
  • mulval topology template

    Hello, I just want to ask: if I change the configuration of the topology generator, do I also have to change the content of the topo_gen_template.P file, or is it a generic template? Thanks.

    opened by shoaib5261 7
  • Evaluating the model

    Thank you for your support, but I have one more question. In the paper you wrote that this model has an accuracy of 0.86. I don't quite understand the evaluation method, the data used for evaluation, and whether that data is in this repo or not.

    Also, can you explain why the model has to be trained over many iterations, with the reward increasing gradually? I think the simplified matrix holds all the possible paths, so the model would just need to loop through all paths and print out the desired one. Sorry for my weak understanding.

    Looking forward to your reply. Thank you!

    opened by QuynhNguyen269 5
  • FileNotFound error

    Hi, I'm trying to run the code but it gives me multiple FileNotFound errors. Please help. Thank you!

    The output is:

    ################################################################################
    AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning
    ################################################################################
    AutoPentest-DRL: Operation mode: Attack on logical network
    AutoPentest-DRL: Target topology: MulVAL_P/logical_topology_1.P

    AutoPentest-DRL: Compute attack path for logical network...
    Generate attack graph using MulVAL...
    sh: 1: ../repos/mulval/utils/graph_gen.sh: not found
    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/./confirm_path.py", line 9, in <module>
        MAP = generateMapClass.sendMap
      File "./learn/generateMap.py", line 108, in sendMap
        self.x = self.createMatrix()
      File "./learn/generateMap.py", line 20, in createMatrix
        self.csvfile = open('../mulval_result/VERTICES.CSV', 'r')
    FileNotFoundError: [Errno 2] No such file or directory: '../mulval_result/VERTICES.CSV'
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 235, in make
        return registry.make(id, **kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 129, in make
        env = spec.make(**kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 89, in make
        cls = load(self.entry_point)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 27, in load
        mod = importlib.import_module(mod_name)
      File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 790, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 12, in <module>
        class dqnEnvironment(gym.Env):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 14, in dqnEnvironment
        MAP = np.loadtxt('../processdata/newmap.txt')
      File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 961, in loadtxt
        fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 195, in open
        return ds.open(path, mode, encoding=encoding, newline=newline)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 535, in open
        raise IOError("%s not found." % path)
    OSError: ../processdata/newmap.txt not found.

    opened by QuynhNguyen269 3
  • AssertionError: The environment must specify an observation space

    Hi everyone, please help. Thank you!

    The output is:

    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.7/dist-packages/gym/envs/registration.py", line 685, in make
        env = PassiveEnvChecker(env)
      File "/usr/local/lib/python3.7/dist-packages/gym/wrappers/env_checker.py", line 26, in __init__
        ), "The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/"
    AssertionError: The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/

    opened by VisaCai 2
  • about article

    In the article "Automated Penetration Testing Using Deep Reinforcement Learning" we found an accuracy metric, and I am confused: is the accuracy measured between the best DQN penetration path and the true path, or something else?

    opened by lixiaohaao 1
  • target drone

    Sorry to bother you frequently. Regarding the construction of a multi-level network, like the network in your experiment, can you elaborate on how to build it?

    Looking forward to your reply. Lixiao

    opened by lixiaohaao 1
Releases(1.0)
  • 1.0(Jun 1, 2021)

    First release of AutoPentest-DRL, an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. The framework can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit.

    Source code(tar.gz)
    Source code(zip)
Owner
Cyber Range Organization and Design Chair
Cyber Range Organization and Design (CROND) NEC-endowed chair at JAIST conducts R&D on cybersecurity education and training