Next-gen Rowhammer fuzzer that uses non-uniform, frequency-based patterns.

Overview

Blacksmith Rowhammer Fuzzer

This repository provides the code accompanying the paper Blacksmith: Scalable Rowhammering in the Frequency Domain, which appeared at the IEEE Symposium on Security and Privacy (S&P) 2022.

This is the implementation of our Blacksmith Rowhammer fuzzer. The fuzzer crafts novel non-uniform Rowhammer access patterns based on the concepts of frequency, phase, and amplitude. Our evaluation on 40 DIMMs showed that it effectively bypasses recent Target Row Refresh (TRR) in-DRAM mitigations and could trigger bit flips on all 40 tested DIMMs.

Getting Started

In the following, we briefly describe how to build and run Blacksmith.

Prerequisites

Blacksmith has been tested on Ubuntu 18.04 LTS with Linux kernel 4.15.0. As the CMakeLists.txt we ship with Blacksmith downloads all required dependencies at build time, there is no need to install any packages other than g++ (>= 8) and cmake (>= 3.14).
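On a Debian/Ubuntu-based system, installing these prerequisites typically looks like the sketch below. Note that this is only a rough guide: on older distributions (e.g., Ubuntu 18.04, whose packaged cmake is 3.10) a sufficiently new cmake has to come from elsewhere, for example Kitware's APT repository or a snap.

sudo apt update
sudo apt install g++-8 cmake   # package names may differ on other distributions
cmake --version                # verify that the >= 3.14 requirement is met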

To facilitate development, we also provide a Docker container (see Dockerfile) in which all required tools and libraries are installed. This container can be configured, for example, as a remote host in the CLion IDE, which automatically transfers the files via SSH to the Docker container (i.e., no manual mapping is required).
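Alternatively, the container can be used directly from the command line. A minimal sketch, assuming you build the image from the supplied Dockerfile (the image tag and mount path below are arbitrary choices, not part of the repository):

docker build -t blacksmith-dev .                                                # build the image from the provided Dockerfile
docker run --rm -it -v "$PWD:/blacksmith" -w /blacksmith blacksmith-dev bash    # assumes bash is available inside the image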

Building Blacksmith

You can build Blacksmith with its supplied CMakeLists.txt in a new build directory:

mkdir build \
  && cd build \
  && cmake .. \
  && make -j$(nproc)
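The snippet above uses CMake's defaults. If you want an explicitly optimized build, the standard CMake way is to request a release build type; whether Blacksmith's CMakeLists.txt already sets a build type on its own is not shown here, so treat this as a generic hint rather than a project requirement:

cmake -DCMAKE_BUILD_TYPE=Release ..   # standard CMake flag for an optimized build
make -j$(nproc)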

Now we can run Blacksmith. For example, to run Blacksmith in fuzzing mode, we pass a DIMM ID (e.g., --dimm-id 1; used internally only for logging into stdout.log), limit the fuzzing to 6 hours (--runtime-limit 21600), pass the number of ranks of the current DIMM (--ranks 1) so that the proper bank/rank functions are selected, and tell Blacksmith to do a sweep with the best pattern found after the fuzzing has finished (--sweeping):

sudo ./blacksmith --dimm-id 1 --runtime-limit 21600 --ranks 1 --sweeping  
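Blacksmith maps its hammering buffer from a 1 GiB superpage (see the mmap-related issues further below, where runs fail with mmap: Invalid argument or mmap: Cannot allocate memory when only 2 MiB huge pages are available). A rough sketch of reserving and mounting such a page beforehand, assuming the CPU supports 1 GiB pages and that the hugetlbfs is expected at /mnt/huge (the mount point seen in the error logs below):

# reserve one 1 GiB huge page; if this fails due to memory fragmentation,
# reserve it at boot instead via the kernel parameters hugepagesz=1G hugepages=1
echo 1 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# mount a hugetlbfs with 1 GiB page size at the expected location
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs -o pagesize=1G none /mnt/huge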

While Blacksmith is running, you can use tail -f stdout.log to keep track of the current progress (e.g., patterns, found bit flips). You will see a line like

[!] Flip 0x2030486dcc, row 3090, page offset: 3532, from 8f to 8b, detected after 0 hours 6 minutes 6 seconds.

whenever a bit flip has been found. After the Blacksmith run has finished, you can find a fuzz-summary.json file that contains the information from stdout.log in a machine-processable format. If you passed the --sweeping flag, you will additionally find a sweep-summary-*.json file that contains the results of the sweeping pass.
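For a quick overview while the fuzzer is still running, the bit-flip lines can also be filtered directly from the log. This is only a convenience sketch based on the log line format shown above:

grep -F "[!] Flip" stdout.log    # list all bit flips reported so far
grep -Fc "[!] Flip" stdout.log   # count them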

Supported Parameters

Blacksmith supports the command-line arguments listed below. Except for --dimm-id and --ranks, all parameters are optional.

    -h, --help
        shows this help message

==== Mandatory Parameters ==================================

    -d, --dimm-id
        internal identifier of the currently inserted DIMM (default: 0)
    -r, --ranks
        number of ranks on the DIMM, used to determine bank/rank/row functions; assumes an Intel Coffee Lake CPU (default: None)
    
==== Execution Modes ==============================================

    -f, --fuzzing
        perform a fuzzing run (default program mode)        
    -g, --generate-patterns
        generates N patterns, but does not perform hammering; used by ARM port
    -y, --replay-patterns <csv-list>
        replays patterns given as comma-separated list of pattern IDs

==== Replaying-Specific Configuration =============================

    -j, --load-json
        loads the specified JSON file generated in a previous fuzzer run, required for --replay-patterns
        
==== Fuzzing-Specific Configuration =============================

    -s, --sync
        synchronize with REFRESH while hammering (default: 1)
    -w, --sweeping
        sweep the best pattern over a contiguous memory area after fuzzing (default: 0)
    -t, --runtime-limit
        number of seconds to run the fuzzer before sweeping/terminating (default: 120)
    -a, --acts-per-ref
        number of activations in a tREF interval, i.e., 7.8us (default: None)
    -p, --probes
        number of different DRAM locations to try each pattern on (default: NUM_BANKS/4)

The default values of the parameters can be found in the struct ProgramArguments.

Configuration parameters of Blacksmith that we did not need to change frequently, and that are therefore not exposed as runtime parameters, can be found in the GlobalDefines.hpp file.
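For example, to replay specific patterns from an earlier fuzzing run, the replay-related flags listed above could be combined as follows. This is only a sketch: whether --load-json expects the fuzz-summary.json produced by the fuzzing run is an assumption here, and the pattern IDs are placeholders.

sudo ./blacksmith --dimm-id 1 --ranks 1 --load-json fuzz-summary.json --replay-patterns 13,42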

Citing our Work

To cite Blacksmith in academic papers, please use the following BibTeX entry:

@inproceedings{jattke2021blacksmith,
  title = {{{BLACKSMITH}}: Scalable Rowhammering in the {{Frequency Domain}}},
  shorttitle = {Blacksmith},
  booktitle = {{{IEEE S}}\&{{P}} '22},
  author = {Jattke, Patrick and {van der Veen}, Victor and Frigo, Pietro and Gunter, Stijn and Razavi, Kaveh},
  year = {2021},
  month = nov,
  note = {\url{https://comsec.ethz.ch/wp-content/files/blacksmith_sp22.pdf}}
}
Comments
  • mmap: Invalid argument

    After installing Blacksmith successfully and setting up the hugepage to 1 GB, I tested the following: sudo ./blacksmith --dimm-id 1 --runtime-limit 120 --ranks 1 and I get this error message: mmap: Invalid argument. How can I check whether my --dimm-id is valid or not? I think it's this argument that causes the issue! My OS: Linux ubuntu 5.11.0-27-generic 64-bit

    opened by AnaMazda 6
  • Blacksmith terminated: Illegal instruction

    After setting the hugepage size to 1 GB and building Blacksmith successfully, the program ends with the output "Illegal instruction" and there is no content in stdout.log.

    opened by HxJi 5
  • Blacksmith not running: mmap: Cannot allocate memory

    Hello,

    I wanted to try your fuzzer on various computers but I always end up with the mmap: Cannot allocate memory error. I thought this would come from my configuration so I tried to increase the number of available huge pages.

    I currently have the following memory configuration regarding huge pages:

    ▶ cat /proc/meminfo|grep Huge                         
    AnonHugePages:         0 kB
    ShmemHugePages:        0 kB
    FileHugePages:         0 kB
    HugePages_Total:     535
    HugePages_Free:      535
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:         1095680 kB
    

    On other devices I could even reach more than 1000 free huge pages, which I believe is enough for allocating 1 GB of memory with huge pages. However, the issue seems to come from somewhere else. I tried running it on two different devices with Arch Linux, Debian 11, and Ubuntu 18.04 LTS, with no success.

    Am I missing something?

    opened by T-TROUCHKINE 5
  • Does this work on WSL2?

    Hi. I got it working on my gen3 I7 build, but I was wondering if this works on WSL2?

    [email protected]:~/blacksmith-public/build$ sudo ./blacksmith --dimm-id 2 --runtime-limit 21600 --ranks 1 --sweeping
    Writing into logfile stdout.log
    [email protected]:~/blacksmith-public/build$ sudo ./blacksmith --dimm-id 1 --runtime-limit 21600 --ranks 1 --sweeping

    [+] General information about this fuzzing run:
    Start timestamp:: 1637689072
    Hostname: PSTEJSKA03-PC
    Commit SHA: NO_REPOSITORY
    Run time limit: 21600 (6 hours 0 minutes 0 seconds)
    [+] Printing run configuration (GlobalDefines.hpp):
    DRAMA_ROUNDS: 1000
    CACHELINE_SIZE: 64
    HAMMER_ROUNDS: 1000000
    THRESH: 495
    NUM_TARGETS: 10
    MAX_ROWS: 30
    NUM_BANKS: 16
    DIMM: 1
    CHANNEL: 1
    MEM_SIZE: 1073741824
    PAGE_SIZE: 4096

    [+] Initializing memory with pseudorandom sequence.
    [-] Could not find conflicting address sets. Is the number of banks (16) defined correctly?

    opened by MrObvious 4
  • Blacksmith doesn't work: /mnt/huge/buff not found

    After running Blacksmith with the default parameters as mentioned in the description, it stopped immediately with the following error in the logfile:

    [+] General information about this fuzzing run:
    Start timestamp:: 1637072011
    Hostname: 1cc27a1cdb50
    Commit SHA: c8e65b709a83665f9528efdedcf064abdb04859f
    Run time limit: 120 (0 hours 2 minutes 0 seconds)
    [+] Printing run configuration (GlobalDefines.hpp):
    DRAMA_ROUNDS: 1000
    CACHELINE_SIZE: 64
    HAMMER_ROUNDS: 1000000
    THRESH: 495
    NUM_TARGETS: 10
    MAX_ROWS: 30
    NUM_BANKS: 16
    DIMM: 1
    CHANNEL: 1
    MEM_SIZE: 1073741824
    PAGE_SIZE: 4096

    [-] Instruction setpriority failed.
    [+] Could not mount superpage from /mnt/huge/buff. Error:

    opened by AnaMazda 3
  • Blacksmith run hangs and log shows strange characters

    Hi

    I installed Blacksmith on an i3-8350K (Coffee Lake-S) system. Unfortunately, the test hangs after a while and stdout.log shows some strange characters. Does anyone have an idea what the reason for this could be?

    BR JKR stdout_2022_02_04_hangs.log

    opened by JKRde 2
  • Could not find conflicting address sets

    I'm not able to get past this error even when recompiling with different NUM_BANKS values - I tried 4, 8, 16, and even 32. Always the same output. I'm not sure what other parameters to adjust, as the error message doesn't suggest anything else.

    My output is:

    [+] General information about this fuzzing run:
    Start timestamp:: 1637333603
    Hostname: ubuntu
    Commit SHA: c8e65b709a83665f9528efdedcf064abdb04859f
    Run time limit: 21600 (6 hours 0 minutes 0 seconds)
    [+] Printing run configuration (GlobalDefines.hpp):
    DRAMA_ROUNDS: 1000
    CACHELINE_SIZE: 64
    HAMMER_ROUNDS: 1000000
    THRESH: 495
    NUM_TARGETS: 10
    MAX_ROWS: 30
    NUM_BANKS: 32
    DIMM: 1
    CHANNEL: 1
    MEM_SIZE: 1073741824
    PAGE_SIZE: 4096
    
    [+] Initializing memory with pseudorandom sequence.
    [-] Could not find conflicting address sets. Is the number of banks (32) defined correctly?
    

    My kernel is 5.13.0-19-generic on ubuntu 21.10

    Any help is appreciated.

    opened by DominikBucko 2
  • Unable to compile on ARM processor

    Blacksmith 0.0.2 has no support for ARM processors:

    [81%] Building CXX object CMakeFiles/bs.dir/src/Fuzzer/AggressorAccessPattern.cpp.o
    In file included from /home/parallels/blacksmith/include/Memory/DramAnalyzer.hpp:13,
                     from /home/parallels/blacksmith/include/Memory/Memory.hpp:13,
                     from /home/parallels/blacksmith/include/Forges/TraditionalHammerer.hpp:9,
                     from /home/parallels/blacksmith/src/Forges/TraditionalHammerer.cpp:1:
    /home/parallels/blacksmith/include/Utilities/AsmPrimitives.hpp: In static member function ‘static void TraditionalHammerer::hammer_sync(std::vector<volatile char*>&, int, volatile char*, volatile char*)’:
    /home/parallels/blacksmith/include/Utilities/AsmPrimitives.hpp:56:3: error: unknown register name ‘%rcx’ in ‘asm’
       56 |   asm volatile("rdtscp\n"
          |   ^~~
    (the same error is reported three more times)
    make[2]: *** [CMakeFiles/bs.dir/build.make:104: CMakeFiles/bs.dir/src/Forges/TraditionalHammerer.cpp.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    make[1]: *** [CMakeFiles/Makefile2:387: CMakeFiles/bs.dir/all] Error 2
    make: *** [Makefile:136: all] Error 2

    opened by UkeraGan 1
  • fix THRESH comment

    In my opinion, THRESH is the threshold to distinguish a row-buffer miss rather than a cache miss, since in the function measure_time() each memory access is followed by a clflushopt that flushes it from the cache.

    opened by Emoth97 0
  • Fuzzer unable to find patterns on some DIMMs

    Hi @pjattke ,

    I've used the Blacksmith fuzzer to find patterns that produce a large number of bit flips on some DIMMs. However, on other DIMMs from the same manufacturer and having similar geometry (same number of ranks and banks), I have not managed to produce even a single bit flip even after repeated invocations of the fuzzer (I've roughly run the fuzzer 6 different times, each fuzzing for a duration of 6 hours). I assume it is unlikely for these DIMMs to be completely robust to the Rowhammer exploit and exploring the search space further should produce bit flips? Did you also come across something similar in your experiments? Do you have any practical advice (perhaps alter the THRESH value defined in GlobalDefines.hpp or run the fuzzer on a particular CPU) so I can produce bit flips on these DIMMs too?

    Let me know if you would require further information and thanks again for your time! cc @kaustav-goswami and @dxaen

    opened by hariv 1
  • Some questions regarding the use of time-based side channels in blacksmith

    Hi @pjattke, I have some questions regarding the use of some time-based side channels in the blacksmith code.

    • If I understood correctly, the find_bank_conflicts() method of DramAnalyzer is using a timing side-channel to find addresses that map to each DRAM bank. However, since blacksmith also uses DRAMA to figure out the DRAM functions to map physical addresses to the DRAM geometry (channel, rank, bank, row, etc) what is the need for this side channel?

    • find_bank_conflicts() checks if the time that is taken to access 2 addresses is above a threshold to determine if the 2 addresses belong to the same bank. How did you determine this threshold? My understanding is that the code is looking for Row buffer misses when accessing the 2 addresses (which would take longer implying that they belong to the same bank), but how did you set a value to the threshold? Is the threshold dependent on each individual DIMM or does it depend on the microarchitecture? Also, why is it that the same pair of addresses is checked twice? Is this done to account for jitter?

    • Lastly, the hammer_sync() method of TraditionalHammerer uses a timing side channel to detect the start of a refresh interval in order to synchronize hammering within that interval. The timing side channel uses 2 addresses in the same bank to do the sync. Is there any reason why the method uses 2 addresses? Could the start of a refresh be detected by accessing just a single address?

    Thanks for your time and wish you a happy new year. cc @kaustav-goswami and @dxaen.

    opened by hariv 3
  • Packaging of Blacksmith in Guix.

    @jgarte and I, with the help of other volunteers, are packaging Blacksmith in Guix. Once completed, Blacksmith can be deployed on any GNU+Linux distribution, with or without Guix, in a reproducible way.

    I am opening this thread so that we can update our progress, including any issues.

    opened by ghost 3
  • Blacksmith on non-Coffee Lake CPUs

    Did anyone try running blacksmith on CPUs other than Coffee Lake?

    I was able to run it successfully on Kaby Lake, but it didn't work on Comet Lake. It errors out immediately saying it could not find conflicting address sets and asks if the number of banks has been defined correctly (which I checked is correct).

    opened by hariv 14
Releases (0.0.2)
Owner
Computer Security Group @ ETH Zurich