
Overview

Angora


Angora is a mutation-based coverage guided fuzzer. The main goal of Angora is to increase branch coverage by solving path constraints without symbolic execution.

Published Work

arXiv: Angora: Efficient Fuzzing by Principled Search, S&P 2018.

Building Angora

Build Requirements

  • Linux-amd64 (Tested on Ubuntu 16.04/18.04 and Debian Buster)
  • Rust stable (>= 1.31), can be obtained using rustup
  • LLVM 4.0.0 - 7.1.0: run PREFIX=/path-to-install ./build/install_llvm.sh.

Environment Variables

Append the following entries to your shell configuration file (~/.bashrc, ~/.zshrc).

export PATH=/path-to-clang/bin:$PATH
export LD_LIBRARY_PATH=/path-to-clang/lib:$LD_LIBRARY_PATH
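
To confirm that the intended toolchain is the one being picked up, a quick sanity check (not part of the official instructions):

# Both should resolve into /path-to-clang
which clang
clang --version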

Fuzzer Compilation

The build script resolves most dependencies and sets up the runtime environment.

./build/build.sh

System Configuration

As with AFL, crashes must not be routed to an external core-dump handler; point the kernel core pattern at a plain file instead.

echo core | sudo tee /proc/sys/kernel/core_pattern
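
This setting does not survive a reboot. A small convenience sketch (not from the Angora docs, file name is hypothetical) to make it persistent via sysctl:

# Persist the core pattern across reboots
echo 'kernel.core_pattern=core' | sudo tee /etc/sysctl.d/99-fuzzing.conf
sudo sysctl -p /etc/sysctl.d/99-fuzzing.conf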

Test

Check whether Angora was built successfully.

cd /path-to-angora/tests
./test.sh mini

Running Angora

Build Target Program

Angora compiles the target program into two separate binaries, each with its own instrumentation. Using an autoconf-based program as an example, these are the required steps.

# Use the instrumenting compilers
CC=/path/to/angora/bin/angora-clang \
CXX=/path/to/angora/bin/angora-clang++ \
LD=/path/to/angora/bin/angora-clang \
PREFIX=/path/to/target/directory \
./configure --disable-shared

# Build with taint tracking support 
USE_TRACK=1 make -j
make install

# Save the compiled target binary into a new directory
# and rename it with .taint postfix, such as uniq.taint

# Build with light instrumentation support
make clean
USE_FAST=1 make -j
make install

# Save the compiled binary into the directory previously
# created and rename it with .fast postfix, such as uniq.fast

If the target fails to build with this approach, try wllvm or gllvm as described in Build a target program; a rough sketch of the wllvm route follows.
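
As an illustration only (the authoritative steps are in the Build a target program documentation), the wllvm route first produces a single whole-program bitcode file and then compiles it twice with angora-clang; the target name and the assumption that angora-clang accepts .bc input are mine, not verified commands:

# Build the target with wllvm so every object carries its bitcode
export LLVM_COMPILER=clang
CC=wllvm CXX=wllvm++ ./configure --disable-shared
make -j

# Extract one whole-program bitcode file from the linked binary
extract-bc ./src/uniq        # produces ./src/uniq.bc

# Compile the bitcode twice with Angora's wrappers (assumed to accept .bc input)
USE_FAST=1  /path/to/angora/bin/angora-clang src/uniq.bc -o uniq.fast
USE_TRACK=1 /path/to/angora/bin/angora-clang src/uniq.bc -o uniq.taint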

We have also implemented taint analysis with libdft64 as an alternative to DFSan (see Use libdft64 for taint tracking).

Fuzzing

./angora_fuzzer -i input -o output -t path/to/taint/program -- path/to/fast/program [argv]
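
For example, with the uniq.taint and uniq.fast binaries built above, and assuming Angora substitutes @@ with the path of the current input file for targets that read from a file (as AFL does; consult docs/ if your target reads from stdin):

./angora_fuzzer -i input -o output -t /path/to/uniq.taint -- /path/to/uniq.fast @@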

For more information, please refer to the documentation under the docs/ directory.

Comments
  • Unable to compile lavam programs correctly

    Hello Angora authors,

    I'm trying to reproduce the LAVA-M evaluation within Magma's infrastructure. However, I think I have run into the following two issues. Could you help me check whether I'm doing anything wrong?

    Thank you in advance!

    The two issues are as follows:

    1. Angora cannot find any bugs, while AFLplusplus easily discovers some within a few minutes. From the log files I see that Angora reports: "Multiple inconsistent warnings. It caused by the fast and track programs has different behaviors. If most constraints are inconsistent, ensure they are compiled with the same environment. Otherwise, please report us."
    2. For the who target, AFLplusplus finds fewer than 20 bugs after running for 5 hours. For the other targets it finds the numbers of bugs reported in your paper.

    You can find the scripts I use to compile and run the fuzzing campaigns here. Basically, the LAVA-M programs are compiled with fuzzers/aflplusplus/instrument.sh and fuzzers/angora/instrument.sh, which set up some configuration and execute targets/lavam/build.sh.
    In targets/lavam/LAVAM you can find the patched source code following your instructions.

    To launch the fuzzing campaigns, cd into tools/captain and run ./run.sh run_lavamrc.
    run_lavamrc is the config file for the campaign. It creates a working directory in ~/lavam-results, builds Docker containers, and starts fuzzing with fuzzers/aflplusplus/run.sh and fuzzers/angora/run.sh. The fuzzing results are stored in ~/lavam-results/ar as tarballs.

    Please do let me know if you need any additional information.

    Spencer

    opened by spencerwuwu 1
  • Fix up compiler warnings

    • Correct signedness for c-strings in angora-clang
    • Const-correctness throughout
    • Move #[link] attribute to extern block

    Fixes all warnings emitted by clang version 14.

    opened by bossmc 0
  • Upgrade to GitHub-native Dependabot

    Dependabot Preview will be shut down on August 3rd, 2021. In order to keep getting Dependabot updates, please merge this PR and migrate to GitHub-native Dependabot before then.

    Dependabot has been fully integrated into GitHub, so you no longer have to install and manage a separate app. This pull request migrates your configuration from Dependabot.com to a config file, using the new syntax. When merged, we'll swap out dependabot-preview (me) for a new dependabot app, and you'll be all set!

    With this change, you'll now use the Dependabot page in GitHub, rather than the Dependabot dashboard, to monitor your version updates, and you'll configure Dependabot through the new config file rather than a UI.

    If you've got any questions or feedback for us, please let us know by creating an issue in the dependabot/dependabot-core repository.

    Learn more about migrating to GitHub-native Dependabot

    Please note that regular @dependabot commands do not work on this pull request.

    dependencies 
    opened by dependabot-preview[bot] 1
  • Angora compile IR

    Does Angora support compiling from LLVM or BAP-derived intermediate representation?

    I am trying to analyze a pre-compiled binary but couldn't figure out how:

     INFO  angora::fuzz_main > CommandOpt { mode: LLVM, id: 0, main: ("/input/azorult2", []), track: ("/input/azorult2", []), tmp_dir: "./output/bar/tmp", out_file: "./output/bar/tmp/cur_input", forksrv_socket_path: "./output/bar/tmp/forksrv_socket", track_path: "./output/bar/tmp/track", is_stdin: true, search_method: Gd, mem_limit: 200, time_limit: 1, is_raw: true, uses_asan: false, ld_library: "$LD_LIBRARY_PATH:/clang+llvm/lib", enable_afl: true, enable_exploitation: true }
    thread 'main' panicked at 'The program is not complied by Angora', fuzzer/src/check_dep.rs:55:9
    
    opened by aug2uag 1
  • Update rand requirement from 0.7 to 0.8

    Updates the requirements on rand to permit the latest version.

    Changelog

    Sourced from rand's changelog.

    [0.8.0] - 2020-12-18

    Platform support

    • The minimum supported Rust version is now 1.36 (#1011)
    • getrandom updated to v0.2 (#1041)
    • Remove wasm-bindgen and stdweb feature flags. For details of WASM support, see the getrandom documentation. (#948)
    • ReadRng::next_u32 and next_u64 now use little-Endian conversion instead of native-Endian, affecting results on Big-Endian platforms (#1061)
    • The nightly feature no longer implies the simd_support feature (#1048)
    • Fix simd_support feature to work on current nightlies (#1056)

    Rngs

    • ThreadRng is no longer Copy to enable safe usage within thread-local destructors (#1035)
    • gen_range(a, b) was replaced with gen_range(a..b). gen_range(a..=b) is also supported. Note that a and b can no longer be references or SIMD types. (#744, #1003)
    • Replace AsByteSliceMut with Fill and add support for [bool], [char], [f32], [f64] (#940)
    • Restrict rand::rngs::adapter to std (#1027; see also #928)
    • StdRng: add new std_rng feature flag (enabled by default, but might need to be used if disabling default crate features) (#948)
    • StdRng: Switch from ChaCha20 to ChaCha12 for better performance (#1028)
    • SmallRng: Replace PCG algorithm with xoshiro{128,256}++ (#1038)

    Sequences

    • Add IteratorRandom::choose_stable as an alternative to choose which does not depend on size hints (#1057)
    • Improve accuracy and performance of IteratorRandom::choose (#1059)
    • Implement IntoIterator for IndexVec, replacing the into_iter method (#1007)
    • Add value stability tests for seq module (#933)

    Misc

    • Support PartialEq and Eq for StdRng, SmallRng and StepRng (#979)
    • Added a serde1 feature and added Serialize/Deserialize to UniformInt and WeightedIndex (#974)
    • Drop some unsafe code (#962, #963, #1011)
    • Reduce packaged crate size (#983)
    • Migrate to GitHub Actions from Travis+AppVeyor (#1073)

    Distributions

    • Alphanumeric samples bytes instead of chars (#935)
    • Uniform now supports char, enabling rng.gen_range('A'..='Z') (#1068)
    • Add UniformSampler::sample_single_inclusive (#1003)

    Weighted sampling

    • Implement weighted sampling without replacement (#976, #1013)
    • rand::distributions::alias_method::WeightedIndex was moved to rand_distr::WeightedAliasIndex. The simpler alternative rand::distribution::WeightedIndex remains. (#945)
    • Improve treatment of rounding errors in WeightedIndex::update_weights (#956)
    • WeightedIndex: return error on NaN instead of panic (#1005)

    Documentation

    • Document types supported by random (#994)
    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
    • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

    Additionally, you can set the following in your Dependabot dashboard:

    • Update frequency (including time of day and day of week)
    • Pull request limits (per update run and/or open at any time)
    • Automerge options (never/patch/minor, and dev/runtime dependencies)
    • Out-of-range updates (receive only lockfile updates, if desired)
    • Security updates (receive only security updates, if desired)
    dependencies 
    opened by dependabot-preview[bot] 0
  • showmap: added tool for displaying coverage data

    Analogous to afl-showmap. Logs code coverage information to a file (in the same format as afl-showmap).

    This is my first time writing Rust, so I hope that it's okay!

    opened by adrianherrera 0
Releases (1.3.0)
  • 1.3.0 (Apr 13, 2022)

    • Support LLVM 11/12
    • Tested with Rust 1.6.* and Ubuntu 20.04
    • Fixed issues:
      • getc model
      • https://github.com/AngoraFuzzer/Angora/commit/b31af93bb7401a296af0ddaa7b80eaaed7f73415
      • https://github.com/AngoraFuzzer/Angora/issues/86
    • New PRs
    Source code(tar.gz)
    Source code(zip)
  • 1.2.2 (Jul 17, 2019)

    • Implementation of the never-zero counter: the idea comes from Marc and Heiko in AFLplusplus. https://github.com/vanhauser-thc/AFLplusplus/blob/master/llvm_mode/README.neverzero

    • Add inst_ratio: issue #67

    • Fix ASan compatibility: do not instrument functions whose names start with "asan.module"

    Source code(tar.gz)
    Source code(zip)
  • 1.2.1 (Jun 14, 2019)

  • 1.2.0 (May 23, 2019)

Dataset used in "PlantDoc: A Dataset for Visual Plant Disease Detection" accepted in CODS-COMAD 2020

PlantDoc: A Dataset for Visual Plant Disease Detection This repository contains the Cropped-PlantDoc dataset used for benchmarking classification mode

Pratik Kayal 109 Dec 29, 2022
PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper.

deep-linear-shapes PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper. If you find this code useful i

Romain Loiseau 27 Sep 24, 2022
A Jupyter notebook to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for a text-based guided image generation.

A Jupyter notebook to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for a text-based guided image generation.

Eugenio Herrera 175 Dec 29, 2022
Autoencoders pretraining using clustering

Autoencoders pretraining using clustering

IITiS PAN 2 Dec 16, 2021
LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation (NeurIPS2021 Benchmark and Dataset Track)

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zh

Kingdrone 174 Dec 22, 2022
Image based Human Fall Detection

Here I integrated the YOLOv5 object detection algorithm with my own created dataset which consists of human activity images to achieve low cost, high accuracy, and real-time computing requirements

UTTEJ KUMAR 12 Dec 11, 2022
RL algorithm PPO and IRL algorithm AIRL written with Tensorflow.

RL algorithm PPO and IRL algorithm AIRL written with Tensorflow. They have a parallel sampling feature in order to increase computation speed (especially in high-performance computing (HPC)).

Fangjian Li 3 Dec 28, 2021
🔥🔥High-Performance Face Recognition Library on PaddlePaddle & PyTorch🔥🔥

face.evoLVe: High-Performance Face Recognition Library based on PaddlePaddle & PyTorch Evolve to be more comprehensive, effective and efficient for fa

Zhao Jian 3.1k Jan 02, 2023
Qlib is an AI-oriented quantitative investment platform

Qlib is an AI-oriented quantitative investment platform, which aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment.

Microsoft 10.1k Dec 30, 2022
K-Means Clustering and Hierarchical Clustering Unsupervised Learning Solution in Python3.

Unsupervised Learning - K-Means Clustering and Hierarchical Clustering - The Heritage Foundation's Economic Freedom Index Analysis 2019 - By David Sal

David Salako 1 Jan 12, 2022
Distributed Evolutionary Algorithms in Python

DEAP DEAP is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data stru

Distributed Evolutionary Algorithms in Python 4.9k Jan 05, 2023
Deep Dual Consecutive Network for Human Pose Estimation (CVPR2021)

Beanie - is an asynchronous ODM for MongoDB, based on Motor and Pydantic. It uses an abstraction over Pydantic models and Motor collections to work wi

295 Dec 29, 2022
Multimodal commodity image retrieval 多模态商品图像检索

Multimodal commodity image retrieval 多模态商品图像检索 Not finished yet... introduce explain:The specific description of the project and the product image dat

hongjie 8 Nov 25, 2022
10th place solution for Google Smartphone Decimeter Challenge at kaggle.

Under refactoring 10th place solution for Google Smartphone Decimeter Challenge at kaggle. Google Smartphone Decimeter Challenge Global Navigation Sat

12 Oct 25, 2022
Guided Internet-delivered Cognitive Behavioral Therapy Adherence Forecasting

Guided Internet-delivered Cognitive Behavioral Therapy Adherence Forecasting #Dataset The folder "Dataset" contains the dataset use in this work and m

0 Jan 08, 2022
A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization

MADGRAD Optimization Method A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization pip install madgrad Try it out! A best

Meta Research 774 Dec 31, 2022
Source code for CAST - Crisis Domain Adaptation Using Sequence-to-sequence Transformers (Accepted to ISCRAM 2021, CorePaper).

Source code for CAST: Crisis Domain Adaptation UsingSequence-to-sequenceTransformers (Paper, BibTeX, Accepted to ISCRAM 2021, CorePaper) Quick start D

Congcong Wang 0 Jul 14, 2021
A highly efficient, fast, powerful and light-weight anime downloader and streamer for your favorite anime.

AnimDL - Download & Stream Your Favorite Anime AnimDL is an incredibly powerful tool for downloading and streaming anime. Core features Abuses the dev

KR 759 Jan 08, 2023
Microscopy Image Cytometry Toolkit

Cytokit Cytokit is a collection of tools for quantifying and analyzing properties of individual cells in large fluorescent microscopy datasets with a

Hammer Lab 106 Jan 06, 2023
Implementation of PersonaGPT Dialog Model

PersonaGPT An open-domain conversational agent with many personalities PersonaGPT is an open-domain conversational agent cpable of decoding personaliz

ILLIDAN Lab 42 Jan 01, 2023