apple-neuralhash-attack

Demonstrates iterative FGSM on Apple's NeuralHash model.

TL;DR: It is possible to apply noise to CSAM images so that they look like regular images to the NeuralHash model. The noise does degrade the CSAM image (see samples), but this was achieved without tuning the learning rate, and more refined attacks are also available.

Example

Here is an example that uses a Grumpy Cat image in place of a CSAM image. The attack adds noise to the Grumpy Cat image and makes the model see it as a Doge image.

As a result, both images have the same NeuralHash of 11d9b097ac960bd2c6c131fa, computed via ONNX Runtime with the script from AsuharietYgvar/AppleNeuralHash2ONNX.

[Images: the Doge image and the adversarial Grumpy Cat image (doge, adv_cat), which share this hash.]
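
For reference, here is a minimal sketch of how such a hash can be computed with ONNX Runtime. It mirrors the approach of the AsuharietYgvar/AppleNeuralHash2ONNX script but is not that script itself; the seed-file layout (a 96x128 float32 matrix after a 128-byte header) is an assumption based on that repository.

# Sketch: compute a NeuralHash with ONNX Runtime.
import sys
import numpy as np
import onnxruntime
from PIL import Image

model_path, seed_path, image_path = sys.argv[1:4]

# Assumed seed layout: a 96x128 float32 projection matrix after a 128-byte header.
seed = np.frombuffer(open(seed_path, "rb").read()[128:], dtype=np.float32).reshape([96, 128])

# Preprocess: resize to 360x360 and scale pixel values to [-1, 1].
image = Image.open(image_path).convert("RGB").resize([360, 360])
arr = np.array(image).astype(np.float32) / 255.0 * 2.0 - 1.0
arr = arr.transpose(2, 0, 1).reshape([1, 3, 360, 360])

# Run the network, project its 128-d output through the seed matrix,
# and read off the sign pattern as a 96-bit hash.
session = onnxruntime.InferenceSession(model_path)
out = session.run(None, {session.get_inputs()[0].name: arr})[0]
bits = "".join("1" if v >= 0 else "0" for v in seed.dot(out.flatten()))
print("{:0{}x}".format(int(bits, 2), len(bits) // 4))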

More generally, because the attack matches the model's 128-dimensional output directly, and the hash is only a seed-dependent projection of that output, the adversarial image will generate largely the same hash as the good image regardless of the seed.
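
Conceptually, the attack repeatedly nudges the bad image so that the model's output for it moves toward the output for the good image, using signed-gradient (FGSM-style) steps on an L2 loss. The sketch below illustrates the idea only: a tiny random CNN stands in for the converted NeuralHash model so the example runs end-to-end, and the step size is illustrative rather than the script's default.

# Sketch of the iterative FGSM idea (not the repo's exact nnhash_attack.py).
import tensorflow as tf

# Stand-in network with a 128-d output; the real attack loads model.pb instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(360, 360, 3)),
    tf.keras.layers.Conv2D(8, 3, strides=4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128),
])

good = tf.random.uniform([1, 360, 360, 3])               # stand-in "good" image
bad = tf.Variable(tf.random.uniform([1, 360, 360, 3]))   # image being perturbed

target = model(good)   # 128-d output the perturbed image should imitate
step = 1e-3            # illustrative step size

for i in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(tf.square(model(bad) - target))  # L2 loss between outputs
    grad = tape.gradient(loss, bad)
    # FGSM-style update: step against the sign of the gradient,
    # then clip back to a valid pixel range.
    bad.assign(tf.clip_by_value(bad - step * tf.sign(grad), 0.0, 1.0))
    if i % 20 == 0:
        print(f"Iteration {i}: L2 loss = {loss.numpy():.1f}")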

Instructions

Get ONNX model

Obtain the ONNX model from AsuharietYgvar/AppleNeuralHash2ONNX. You should have a path to a model.onnx file.

Convert ONNX model to TF model

Next, convert the ONNX model to a TensorFlow model. First install the onnx_tf library from onnx/onnx-tensorflow, then run the following:

python3 convert.py -o /path/to/model.onnx

This will save a TensorFlow model to the current directory as model.pb.
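
A convert.py along these lines would do the job; this is only a sketch assuming the onnx_tf backend from onnx/onnx-tensorflow, and depending on the onnx-tensorflow version, export_graph may write a single .pb file or a SavedModel directory.

# Sketch of an ONNX -> TensorFlow conversion using the onnx_tf backend.
import argparse
import onnx
from onnx_tf.backend import prepare

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--onnx", required=True, help="Path to model.onnx")
args = parser.parse_args()

onnx_model = onnx.load(args.onnx)   # load the ONNX graph
tf_rep = prepare(onnx_model)        # convert it to a TensorFlow representation
tf_rep.export_graph("model.pb")     # write the TensorFlow model to disk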

Run adversarial attack

Finally, run the adversarial attack with the following:

python3 nnhash_attack.py --seed /path/to/neuralhash_128x96_seed1.dat

Other arguments:

-m            Path to TensorFlow model (defaults to "model.pb")
--good        Path to good image (defaults to "samples/doge.png")
--bad         Path to bad image (defaults to "samples/grumpy_cat.png")
--lr          Learning rate (defaults to 3e-1)
--save_every  Interval, in iterations, between saved images (defaults to 2000)

This will save generated images to samples/iteration_{i}.png.
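
For example, a run that points to the converted model and adjusts the learning rate and save interval might look like the following (the flag values are illustrative, not tuned recommendations):

python3 nnhash_attack.py --seed /path/to/neuralhash_128x96_seed1.dat -m model.pb --lr 1e-1 --save_every 500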

Note that the hash similarity may decrease initially before increasing again.

For the sample images with default parameters, the two hashes became identical after 28000 iterations.

Terminal output:

# Some Tensorflow boilerplate...
Iteration #2000: L2-loss=134688, Hash Similarity=0.2916666666666667
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20f1089728150af2ca2de49a
Saving image to samples/iteration2000.png...
Iteration #4000: L2-loss=32605, Hash Similarity=0.41666666666666677
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20d9b097ac170ad2cfe170da
Saving image to samples/iteration4000.png...
Iteration #6000: L2-loss=18547, Hash Similarity=0.4166666666666667
Good Hash: 11d9b097ac960bd2c6c131fa
Bad Hash : 20d9b097ac170ad2c7c1f0de
Saving image to samples/iteration6000.png...

Credit

Owner: Lim Swee Kiat