Official PyTorch implementation of the paper "Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images"

Overview

Reverse_Engineering_GMs

The paper and supplementary material can be found at https://arxiv.org/abs/2106.07873


Prerequisites

  • PyTorch 1.5.0
  • NumPy 1.14.2
  • scikit-learn 0.22.2
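
A matching environment can be set up with pip; the pins below simply mirror the versions listed above and may need adjusting for your platform:

    pip install torch==1.5.0 numpy==1.14.2 scikit-learn==0.22.2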

Getting Started

Datasets

For reverse engineering:

For deepfake detection:

  • Download the CelebA/LSUN dataset

For image_attribution:

  • Generate 110,000 images for four different GAN models as specified in https://github.com/ningyu1991/GANFingerprints/
  • For real images, use 110,000 images from the CelebA dataset.
  • For training, we used 100,000 images; the remaining 10,000 were used for testing.
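
The scripts load data with torchvision's datasets.ImageFolder (see the code snippets in the issues below), so each data root must contain one subfolder per class. A hypothetical layout for image attribution, with illustrative folder names, might be:

    Test_Dataset/
        celeba_real/
        gan_model_1/
        gan_model_2/
        gan_model_3/
        gan_model_4/

with the images for each class placed directly inside its folder.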

Training

  • Provide the training and testing data paths in the respective scripts, as specified below (see the example invocation after the commands).
  • Provide the model path to resume training.
  • Run the code:

For reverse engineering, run:

python reverse_eng.py

For deepfake detection, run:

python deepfake_detection.py

For image attribution, run:

python image_attribution.py
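
The scripts read these paths from argparse flags; the flag names below are taken from reverse_eng.py as quoted in the issues further down (other scripts may differ), so a typical invocation with illustrative paths might look like:

    python reverse_eng.py \
        --data_train /path/to/GAN_data/set_1/train \
        --data_test /path/to/GAN_data/set_1/test \
        --ground_truth_dir ./ \
        --savedir /path/to/runs \
        --model_dir ./models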

Testing using pre-trained models

For reverse engineering, run:

python reverse_eng_test.py

For deepfake detection, run:

python deepfake_detection_test.py

For image attribution, run:

python image_attribution_test.py
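
The pre-trained models are pickled dictionaries holding both network and optimizer state. For reference, the restore below mirrors the snippet from image_attribution_test.py quoted in the issues; the checkpoint path and key names come from that snippet, and model, model_2, optimizer, and optimizer_2 are assumed to be constructed as in the script:

    state1 = torch.load("pre_trained_models/image_attribution/celeba/0_model_27_384000.pickle")
    model.load_state_dict(state1['state_dict_cnn'])        # feature-extraction network
    optimizer.load_state_dict(state1['optimizer_1'])
    model_2.load_state_dict(state1['state_dict_class'])    # classification encoder
    optimizer_2.load_state_dict(state1['optimizer_2'])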

If you would like to use our work, please cite:

@misc{asnani2021reverse,
      title={Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images}, 
      author={Vishal Asnani and Xi Yin and Tal Hassner and Xiaoming Liu},
      year={2021},
      eprint={2106.07873},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    Hello, I ran into a problem (shown in the screenshot below) when executing "reverse_eng_test.py" and loading the model "11_model_set_1.pickle". Could you please tell me what this error means? I am not familiar with the architecture of the model or the given pre-trained model "11_model_set_1.pickle". Above the error is the output of print(state1['optimizer_1']), which I added to inspect the state of state1['optimizer_1']. Thank you!

    [screenshot: error output]
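
    For what it's worth, a common workaround for this class of mismatch (an untested suggestion, not from the maintainers) is to restore only the network weights and rebuild the optimizer from scratch, since the saved optimizer's parameter groups do not line up; model, model_2, and opt.lr are assumed to be set up as in the test script:

        state1 = torch.load("11_model_set_1.pickle", map_location="cpu")
        model.load_state_dict(state1['state_dict_cnn'])      # restore networks only
        model_2.load_state_dict(state1['state_dict_class'])
        # skip state1['optimizer_1'] / state1['optimizer_2'] and recreate instead:
        optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr)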

    opened by hyhchaos 9
  • The .npy files in the rev_eng_updated.py could not be found in the main folders or the .zip or tar.gz file

    The .npy files referenced in rev_eng_updated.py could not be found in the main folders or in the .zip or tar.gz archives. The missing files are loaded in the following code:

    ground_truth_net_all=torch.from_numpy(np.load("ground_truth_net_131_15dim.npy"))
    ground_truth_loss_9_all=torch.from_numpy(np.load("ground_truth_loss_131_10dim.npy"))

    ground_truth_net_all_dev=torch.from_numpy(np.load("net_dev_131_dim.npy"))
    ground_truth_loss_9_all_dev=torch.from_numpy(np.load("ground_truth_loss_131_10dim.npy"))

    ground_truth_net_cluster=torch.from_numpy(np.load("net_cluster_131_dim.npy"))
    ground_truth_loss_9_cluster=torch.from_numpy(np.load("loss_cluster_131_dim.npy"))
    #ground_truth_net_all=torch.from_numpy(np.load("random_ground_truth_net_arch_91_15dim.npy"))
    #ground_truth_loss_all=torch.from_numpy(np.load("random_ground_truth_loss_91_3dim.npy"))
    #ground_truth_loss_9_all=torch.from_numpy(np.load("random_ground_truth_loss_91_9dim.npy"))

    ground_truth_p=torch.from_numpy(np.load("p_131_.npy"))

    Could you tell me where I can find them? Thank you very much. Best wishes!

    opened by zhangtzq 3
  • deepfake_detection.py gives an error ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    @vishal3477 I couldn't run "deepfake_detection_test.py"; it gives the error below at the following line. Thanks,

    optimizer.load_state_dict(state1['optimizer_1'])
    

    [screenshot: deepfake_detection_test error]

    opened by ssablak 3
  • What is "ground_truth_dir" in "reverse_eng_test.py"?

    I have downloaded the data and models. When I run "reverse_eng_test.py", I find that I cannot provide the files below. Could you please explain how I can obtain these files? Thank you very much!

    ground_truth_net_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_net_arch_100_15dim.npy"))
    ground_truth_loss_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_loss_100_3dim.npy"))
    ground_truth_loss_9_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_loss_100_9dim.npy"))
    
    opened by hyhchaos 3
  • torch.rfft is deprecated

    @vishal3477 Since rfft is deprecated in newer torch versions, it gives the following error: [screenshot: rfft error]

    I tried to fix it, but then it gives another error: [screenshot: rfft2 error]

    Could you please explain how to define rfft in newer versions of PyTorch? Thanks. -Steve
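
    For reference, torch.rfft was removed in PyTorch 1.8, where the torch.fft module returns complex tensors instead of a trailing real/imaginary dimension. A possible migration, assuming the original call was torch.rfft(x, 2, onesided=False) (an assumption about this codebase, not a confirmed patch), is:

        import torch

        x = torch.randn(4, 3, 128, 128)

        # Old (PyTorch <= 1.7):
        # y = torch.rfft(x, signal_ndim=2, onesided=False)

        # New (PyTorch >= 1.8): view_as_real recovers the old (..., 2) real layout
        y = torch.view_as_real(torch.fft.fft2(x, dim=(-2, -1)))

        # If the original call used onesided=True, use rfft2 instead:
        # y = torch.view_as_real(torch.fft.rfft2(x, dim=(-2, -1)))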

    opened by ssablak 2
  • Getting only 0.1916 Accuracy in Image Attribution

    [screenshot: test run output]

    I'm getting only 0.1916 accuracy on the image attribution task. In the test dataset I've put, for each of the five classes, 1K generated images from the respective GANs and 1K real images from CelebA, and I'm using the pre-trained model.

    I'm using the following code in image_attribution_test.py file:

    from torchvision import datasets, models, transforms
    #from model import *
    import os
    import torch
    from torch.autograd import Variable
    from skimage import io
    from scipy import fftpack
    import numpy as np
    from torch import nn
    import datetime
    from models import encoder_image_attr
    from models import fen
    import torch.nn.functional as F
    from sklearn.metrics import accuracy_score
    from sklearn import metrics
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--lr', default=0.0001, type=float, help='learning rate')
    parser.add_argument('--data_test',default='Test_Dataset/',help='root directory for testing data')
    parser.add_argument('--ground_truth_dir',default='./',help='directory for ground truth')
    parser.add_argument('--seed', default=1, type=int, help='manual seed')
    parser.add_argument('--batch_size', default=16, type=int, help='batch size')
    parser.add_argument('--savedir', default='runs')
    parser.add_argument('--model_dir', default='./models')
    
    
    
    opt = parser.parse_args()
    print(opt)
    print("Random Seed: ", opt.seed)
    
    device=torch.device("cuda:0")
    torch.backends.deterministic = True
    torch.manual_seed(opt.seed)
    torch.cuda.manual_seed_all(opt.seed)
    sig = "sig"
    
    
    test_path=opt.data_test
    save_dir=opt.savedir
    
    os.makedirs('%s/logs/%s' % (save_dir, sig), exist_ok=True)
    os.makedirs('%s/result_2/%s' % (save_dir, sig), exist_ok=True)
    
    transform_train = transforms.Compose([
    transforms.Resize((128,128)),
    transforms.ToTensor(),
    transforms.Normalize((0.6490, 0.6490, 0.6490), (0.1269, 0.1269, 0.1269))
    ])
    
    
    test_set=datasets.ImageFolder(test_path, transform_train)
    
    
    test_loader = torch.utils.data.DataLoader(test_set,batch_size=opt.batch_size,shuffle =True, num_workers=1)
    
    
    
    model=fen.DnCNN().to(device)
    
    model_params = list(model.parameters())    
    optimizer = torch.optim.Adam(model_params, lr=opt.lr)
    l1=torch.nn.MSELoss().to(device)
    l_c = torch.nn.CrossEntropyLoss().to(device)
    
    model_2=encoder_image_attr.encoder(num_hidden=512).to(device)
    optimizer_2 = torch.optim.Adam(model_2.parameters(), lr=opt.lr)
    state = {
        'state_dict_cnn':model.state_dict(),
        'optimizer_1': optimizer.state_dict(),
        'state_dict_class':model_2.state_dict(),
        'optimizer_2': optimizer_2.state_dict()
        
    }
    
    
    state1 = torch.load("pre_trained_models/image_attribution/celeba/0_model_27_384000.pickle")
    optimizer.load_state_dict(state1['optimizer_1'])
    model.load_state_dict(state1['state_dict_cnn'])
    optimizer_2.load_state_dict(state1['optimizer_2'])
    model_2.load_state_dict(state1['state_dict_class'])
    
    
    
    
    def test(batch, labels):
        model.eval()
        model_2.eval()
        with torch.no_grad():
            y,low_freq_part,max_value ,y_orig,residual, y_trans,residual_gray =model(batch.type(torch.cuda.FloatTensor))
            y_2=torch.unsqueeze(y.clone(),1)
            classes, features=model_2(y_2)
            classes_f=torch.max(classes, dim=1)[0]
            
            n=25
            zero=torch.zeros([y.shape[0],2*n+1,2*n+1], dtype=torch.float32).to(device)  
            zero_1=torch.zeros(residual_gray.shape, dtype=torch.float32).to(device)
            loss1=0.5*l1(low_freq_part,zero).to(device) 
            loss2=-0.001*max_value.to(device)
            loss3 = 0.01*l1(residual_gray,zero_1).to(device)
            loss_c =10*l_c(classes,labels.type(torch.cuda.LongTensor))
            loss5=0.1*l1(y,y_trans).to(device)
            loss=(loss1+loss2+loss3+loss_c+loss5)
        return y, loss.item(), loss1.item(),loss2.item(),loss3.item(),loss_c.item(),loss5.item(),y_orig, features,residual,torch.max(classes, dim=1)[1], classes[:,1]
    
    
    print(len(test_set))
    print(test_set.class_to_idx)
    epochs=2
    
    
    for epoch in range(epochs):
        all_y=[]
        all_y_test=[]
        flag1=0
        count=0
        itr=0
        
        for batch_idx_test, (inputs_test,labels_test) in enumerate(test_loader):
    
            out,loss,loss1,loss2,loss3,loss4,loss5, out_orig,features,residual,pred,scores=test(Variable(torch.FloatTensor(inputs_test)),Variable(torch.LongTensor(labels_test)))
    
            if flag1==0:
                all_y_test=labels_test
                all_y_pred_test=pred.detach()
                all_scores=scores.detach()
                flag1=1
    
            else:
                all_y_pred_test=torch.cat([all_y_pred_test,pred.detach()], dim=0)
                all_y_test=torch.cat([all_y_test,labels_test], dim=0)
                all_scores=torch.cat([all_scores,scores], dim=0)
        fpr1, tpr1, thresholds1 = metrics.roc_curve(all_y_test, np.asarray(all_scores.cpu()), pos_label=1)
        print("testing accuracy is:", accuracy_score(all_y_test,np.asarray(all_y_pred_test.cpu())))
    
    opened by indrakumarmhaski 1
  • Groundtruth Files Issue

    Hi Vishal, where can I download the following files? I see three .npy files in the repo, but the naming does not match between the repo and the source code.

    I renamed the files in the repo as follows.

    FROM:

        ground_truth_loss_func_3dim_file.npy
        ground_truth_loss_func_8dim_file.npy
        ground_truth_net_arch_15dim_file.npy

    TO:

        ground_truth_loss_100_9dim.npy
        ground_truth_net_arch_100_15dim.npy
        ground_truth_loss_100_3dim.npy

    [screenshots: ground-truth files before and after renaming]

    But it didn't run through; it gives the following error: [screenshot: error]

    Thanks, -Steve

    opened by ssablak 1
  • I have a question

    Hello, do I need to create all the paths in reverse_eng.py? What do I need to save in each folder?

    parser.add_argument('--lr', default=0.0001, type=float, help='learning rate')
    parser.add_argument('--data_train',default='mnt/scratch/asnanivi/GAN_data_6/set_1/train',help='root directory for training data')
    parser.add_argument('--data_test',default='mnt/scratch/asnanivi/GAN_data_6/set_1/test',help='root directory for testing data')
    parser.add_argument('--ground_truth_dir',default='./',help='directory for ground truth')
    parser.add_argument('--seed', default=1, type=int, help='manual seed')
    parser.add_argument('--batch_size', default=16, type=int, help='batch size')
    parser.add_argument('--savedir', default='/mnt/scratch/asnanivi/runs')
    parser.add_argument('--model_dir', default='./models')
    parser.add_argument('--N_given', nargs='+', help='position number of GM from list of GMs used in testing', default=[1,2,3,4,5,6])

    os.chmod('./mnt/scratch',0o777)
    os.makedirs('.%s/result_3/%s' % (save_dir, sig), exist_ok=True)

    I also got an error: Couldn't find any class folder in mnt/scratch/asnanivi/GAN_data_6/set_1/train

    Thanks!

    opened by YZF-Myself 1
  • There is no codes about the cluster prediction about the discrete type network structure parameter in the encoder_rev_eng.py file

    I'm sorry to bother you, but I could not find the code for the clustering prediction of the discrete network-architecture parameters in the encoder_rev_eng.py file of the original models folder, nor in the latest Reverse Engineering 2.0 code archive. However, your article describes clustering prediction for the discrete network-architecture parameters, which is important for the results. Looking forward to your reply.

    opened by zhangtzq 5
  • Ground truth file missing

    Hi, thank you for sharing your code and data. I'm trying to run the reverse_eng_train.py and reverse_eng_test.py scripts, but both fail because of missing files required in the following lines:

    ground_truth_net_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_net_arch_100_15dim.npy"))
    ground_truth_loss_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_loss_100_3dim.npy"))
    ground_truth_loss_9_all=torch.from_numpy(np.load(opt.ground_truth_dir+ "ground_truth_loss_100_9dim.npy"))
    

    I downloaded the dataset of trained models from the Google Drive link in the README, but couldn't find any information about where to access this ground-truth data.

    Also, could you verify that the file 11_model_set_1.pickle on Google Drive contains the 100 trained models? When I load the file (e.g. data = torch.load('11_model_set_1.pickle')), I get a checkpoint of a single model (and optimizers). I'd appreciate it if you could verify that this is the right file for downloading the trained models.
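
    For anyone hitting the same question, the checkpoint contents can be inspected directly; the key names in the comment below are the ones used by the repo's test scripts (quoted in other issues here), not a verified listing of this particular file:

        import torch

        data = torch.load('11_model_set_1.pickle', map_location='cpu')
        print(data.keys())
        # per the test scripts, expected keys: 'state_dict_cnn', 'optimizer_1',
        #                                      'state_dict_class', 'optimizer_2'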

    Thank you!

    opened by cocoaaa 1
  • Parameter setting in deepfake detection

    Thank you very much for your contribution. In the deepfake detection module of the paper, the parameters lambda1-4 are set as follows, which is inconsistent with the code: [screenshot: parameter settings from the paper]

    loss1=0.05*l1(low_freq_part,zero).to(device) 
    loss2=-0.001*max_value.to(device)
    loss3 = 0.01*l1(residual_gray,zero_1).to(device)
    loss_c =20*l_c(classes,labels.type(torch.cuda.LongTensor))
    loss5=0.1*l1(y,y_trans).to(device)
    

    Can you explain that? Thank you.

    opened by wytcsuch 5
Releases (v2.0)