FairFuzz: AFL extension targeting rare branches

An AFL extension to increase code coverage by targeting rare branches. FairFuzz has a particular advantage on programs with highly nested structure (packet analyzers, xmllint, programs compiled with laf-intel, etc.). AFL is written and maintained by Michal Zalewski [email protected]; the FairFuzz extension is by Caroline Lemieux [email protected].

Our paper on FairFuzz was published in ASE 2018.

Summary

This is a modified version of AFL which changes (1) the way AFL selects inputs for mutation and (2) the way AFL mutates inputs, in order to target rare "branches" in the program under test. AFL tracks the program coverage an input achieves by counting the number of times the input hits each basic block transition (see the AFL technical details for more detail). A basic block transition can be loosely associated with a branch in the control flow graph, so we refer to these transitions as branches.
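For intuition, AFL's instrumentation records these transitions roughly as follows (a simplified C sketch of the scheme described in AFL's docs/technical_details.txt; the shared-memory plumbing is omitted):

```c
#include <stdint.h>

#define MAP_SIZE 65536          /* AFL's default coverage map size */

uint8_t  trace_bits[MAP_SIZE];  /* one counter per basic block transition */
uint16_t prev_loc;              /* ID of the previously executed block */

/* Conceptually, AFL's instrumentation runs this at every basic block;
   cur_loc is a compile-time random ID for the block. The XOR identifies
   the (prev -> cur) transition, and shifting prev_loc keeps A->B
   distinct from B->A and from tight self-loops. */
void log_transition(uint16_t cur_loc) {
  trace_bits[cur_loc ^ prev_loc]++;
  prev_loc = cur_loc >> 1;
}
```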

On many benchmarks, this modification achieves faster branch coverage than AFL or AFLFast. The advantage appears particularly strong on programs structured with many nested conditional statements. The graphs below show the number of basic block transitions covered over 24 hours on 9 different benchmarks. Each run was repeated 20 times (a single fuzzer instance per run): the dark middle line is the average coverage achieved, and the bands represent 95% confidence intervals.

[Figure: 24-hour coverage runs on the 9 benchmarks]

Evaluation was conducted on AFL 2.40b. All runs were done with no AFL dictionary: the emphasis on rare branches appears to yield better automated discovery of special sequences. On the tcpdump and xmllint benchmarks, FairFuzz discovered rare sequences over its 20 runs which the other techniques did not. For each technique, the number of runs (out of 20) finding portions of the sequence <!ATTLIST after 24 hours was:

subsequence   AFL   FidgetyAFL   AFLFast.new   FairFuzz
<!A             7           15            18         17
<!AT            1            2             3         11
<!ATT           1            0             0          1

More details are in the paper.

Technical Summary

What is a rare branch?

We consider a branch to be rare if it is hit by fewer inputs than the other branches explored so far. We maintain a dynamic threshold for rarity: a branch is rare if it is hit by no more inputs than this threshold. Precisely, the threshold is the smallest power of two greater than or equal to the number of inputs hitting the least-hit branch. For example, if the branch hit by the fewest inputs is hit by 19 inputs, the threshold is 32, so any branch hit by at most 32 inputs is rare. Once the rarest branch is hit by 33 inputs, the threshold goes up to 64.

To keep track of rarity information, after running each input we increment a hit count for every branch that input hits.
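A minimal C sketch of this bookkeeping (branch_hits, update_hits, and the other names are illustrative, not FairFuzz's actual identifiers):

```c
#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE 65536                 /* one slot per branch, as in AFL */

uint32_t branch_hits[MAP_SIZE];        /* # inputs that have hit each branch */

/* After running an input, bump the hit count of every branch it hit. */
void update_hits(const uint8_t *trace_bits) {
  for (size_t i = 0; i < MAP_SIZE; i++)
    if (trace_bits[i]) branch_hits[i]++;
}

/* Rarity threshold: the smallest power of two at or above the hit
   count of the least-hit branch seen so far. */
uint32_t rarity_threshold(void) {
  uint32_t min_hits = UINT32_MAX;
  for (size_t i = 0; i < MAP_SIZE; i++)
    if (branch_hits[i] && branch_hits[i] < min_hits)
      min_hits = branch_hits[i];
  if (min_hits == UINT32_MAX) return 1;  /* no branch hit yet */
  uint32_t t = 1;
  while (t < min_hits) t <<= 1;          /* e.g. min_hits = 19 -> t = 32 */
  return t;
}

/* A branch is rare if it has been hit by no more inputs than the
   threshold allows. */
int is_rare(size_t branch) {
  return branch_hits[branch] != 0 &&
         branch_hits[branch] <= rarity_threshold();
}
```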

How are inputs hitting rare branches selected?

When going through the queue to select inputs to mutate, FairFuzz selects an input only if -- at the time of selection -- it hits a rare branch; that rare branch becomes the input's target branch. If an input hits multiple rare branches, FairFuzz selects the rarest one (the one hit by the fewest inputs) as the target branch. If an input hits no rare branches, it is skipped (see the -q option below).
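Target selection could then look like the following sketch, reusing branch_hits and is_rare from the sketch above (again, illustrative names only):

```c
#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE 65536

extern uint32_t branch_hits[MAP_SIZE];   /* from the rarity sketch above */
extern int is_rare(size_t branch);       /* ditto */

/* Choose the target branch for an input: the rarest rare branch the
   input hits (fewest inputs hitting it). Returns -1 if the input hits
   no rare branch, in which case the input is skipped. */
long pick_target_branch(const uint8_t *input_trace) {
  long target = -1;
  uint32_t best = UINT32_MAX;
  for (size_t i = 0; i < MAP_SIZE; i++) {
    if (!input_trace[i] || !is_rare(i)) continue;
    if (branch_hits[i] < best) {
      best = branch_hits[i];
      target = (long)i;
    }
  }
  return target;
}
```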

How are mutations targeted to a branch?

Once an input is selected from the queue, along with its designated target branch, FairFuzz computes a branch mask for the target branch. For every position in the input, the branch mask designates whether

  1. the byte at that position can be overwritten (overwritable),
  2. the byte at that position can be deleted (deletable), or
  3. a byte can be inserted at that position (insertable),

while still hitting the target branch. FairFuzz computes an approximation to this branch mask dynamically, in a way similar to AFL's effector map: it goes through the input and, at each position, flips the byte, deletes the byte, or inserts a byte, then runs the mutated input to check whether it still hits the target branch. If the mutated input hits the target branch, the position is marked as overwritable, deletable, or insertable, respectively.
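A hedged sketch of this computation, assuming a hits_target() oracle that runs the instrumented target and reports whether the given branch was hit (all names here are hypothetical, and inputs are assumed to fit in MAX_INPUT bytes):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MAX_INPUT    4096   /* assumed input-size bound for this sketch */
#define OVERWRITABLE 1
#define DELETABLE    2
#define INSERTABLE   4

/* Assumed oracle: run the instrumented target on buf and report
   whether the run hits target_branch. */
extern int hits_target(const uint8_t *buf, size_t len, long target_branch);

/* Approximate the branch mask: at each position, try one overwrite,
   one delete, and one insert, and record which mutations still hit
   the target branch. */
void compute_branch_mask(const uint8_t *in, size_t len,
                         long target, uint8_t *mask) {
  uint8_t tmp[MAX_INPUT + 1];

  for (size_t i = 0; i < len; i++) {
    mask[i] = 0;

    /* Overwrite: flip the byte at position i. */
    memcpy(tmp, in, len);
    tmp[i] ^= 0xFF;
    if (hits_target(tmp, len, target)) mask[i] |= OVERWRITABLE;

    /* Delete: drop the byte at position i. */
    memcpy(tmp, in, i);
    memcpy(tmp + i, in + i + 1, len - i - 1);
    if (hits_target(tmp, len - 1, target)) mask[i] |= DELETABLE;

    /* Insert: add one byte before position i. */
    memcpy(tmp, in, i);
    tmp[i] = 'A';
    memcpy(tmp + i + 1, in + i, len - i);
    if (hits_target(tmp, len + 1, target)) mask[i] |= INSERTABLE;
  }
}
```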

The branch mask is then used to influence mutations as follows:

  1. In the deterministic stages [1], a mutation is performed at a location only if the branch mask designates the position as overwritable (or insertable, for dictionary element insertion). For multi-byte modifications, a modification is performed only if all affected bytes are overwritable. The mask is used in conjunction with the effector map when the effector map is in play.
  2. In the havoc stage, positions for mutations are randomly selected from the modifiable positions of the branch mask (a sketch of this selection follows the footnotes below). For example, if bytes 5-10 and 13-20 of a 20-byte input can be overwritten without failing to hit the target branch, then when randomly selecting a byte to mutate, FairFuzz picks a position from [5,6,7,8,9,10,13,...,20]. After a havoc mutation that deletes (resp. adds) a sequence of bytes in the input, FairFuzz deletes the sequence (resp. adds an "all modifiable" sequence) at the corresponding location in the branch mask [2]. The branch mask becomes approximate after this point.

[1] The mask is not used in the bitflipping stage, since this would interfere with AFL's automatic detection of dictionary elements.

[2] The mask is not used to determine where to splice inputs in the splicing stage: during splicing, the first part of the branch mask is kept, but the spliced-in half of the input is marked as all modifiable.
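As referenced in point 2 above, havoc-stage position selection restricted by the mask might look like this sketch (again with illustrative names, reusing the mask flags from the earlier sketch):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_INPUT    4096   /* assumed input-size bound, as above */
#define OVERWRITABLE 1      /* mask flag, as in the mask sketch above */

/* Havoc-stage position selection: pick a random position among those
   the branch mask marks as overwritable. Returns -1 if none qualify. */
long random_overwritable_pos(const uint8_t *mask, size_t len) {
  size_t candidates[MAX_INPUT];
  size_t n = 0;
  for (size_t i = 0; i < len && i < MAX_INPUT; i++)
    if (mask[i] & OVERWRITABLE) candidates[n++] = i;
  if (n == 0) return -1;
  return (long)candidates[rand() % n];
}
```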

Usage summary

For basic AFL usage, see the README in docs/ (the one with no .md extension). There are four FairFuzz-specific (rare-branch) options:

Running options (may be useful for functionality):

  • -r adds an additional trimming stage before mutating inputs. This trimming is more aggressive than AFL's, trimming the input down only according to the target branch -- the resulting trimmed input may have a different path than the original input, but will still hit the target branch. Recommended for use if there are very large seed files and deterministic mutations are being run.
  • -q num bootstraps the rare branch input selection from the queue with the regular AFL input selection mechanism. If, after an entire cycle through the queue, no new branches are discovered, FairFuzz bootstraps according to num as follows:
    • -q 1 go back to regular AFL queueing + no branch mask on mutations until a new branch is found
    • -q 2 go back to regular AFL queueing + no branch mask on mutations + no deterministic mutations until a new branch is found
    • -q 3 go back to regular AFL queueing + no branch mask on mutations for a single queueing cycle

Evaluation options (mostly useful for comparing AFL versions):

  • -b disables the branch mask (sets every position in the mask as modifiable; this incurs unnecessary slowdown compared to plain AFL).
  • -s runs a "shadow" mutation run before the branch-mask-enabled run, with side effects disabled. This allows a direct comparison of the branch mask's effect on new coverage discovered and on the number of inputs hitting the target branch. See min-branch-fuzzing.log in the AFL output directory for details.
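For example, a run enabling the extra trimming stage and the first bootstrapping mode might look like the following, where seeds/, out/, and ./target @@ are placeholders following standard afl-fuzz usage:

```
./afl-fuzz -i seeds -o out -r -q 1 ./target @@
```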