Overview

FFD Source Code

Provided is code that demonstrates the training and evaluation of the work presented in the paper: "On the Detection of Digital Face Manipulation" published in CVPR 2020.

Figure: The proposed network framework with the attention mechanism.

Project Webpage

See the MSU CVLab website for project details and access to the DFFD dataset.

http://cvlab.cse.msu.edu/project-ffd.html

Notes

This code is provided as an example and may not reflect a specific combination of hyper-parameters presented in the paper.

Description of contents

  • xception.py: Defines the Xception network with the attention mechanism
  • train*.py: Train the model on the train data
  • test*.py: Evaluate the model on the test data
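
For orientation, the sketch below illustrates the core idea behind the attention mechanism in xception.py: backbone features are reduced to a single-channel manipulation map, which re-weights those features and can be supervised against the ground-truth mask. This is a simplified, hypothetical illustration (layer names and channel sizes are assumptions), not the repository's exact module.

  import torch
  import torch.nn as nn

  class AttentionHead(nn.Module):
      # Illustrative sketch: estimate a 1-channel attention/manipulation map
      # from backbone features and use it to re-weight those features.
      def __init__(self, in_channels=728):
          super().__init__()
          self.map_conv = nn.Conv2d(in_channels, 1, kernel_size=1)

      def forward(self, feat):
          att = torch.sigmoid(self.map_conv(feat))  # (B, 1, H, W), e.g. 19x19
          return feat * att, att                    # re-weighted features, attention map

  # With 299x299 inputs, Xception's mid-level features are roughly 19x19 with 728 channels
  feat = torch.randn(2, 728, 19, 19)
  weighted, att_map = AttentionHead()(feat)
  print(att_map.shape)  # torch.Size([2, 1, 19, 19])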

Acknowledgements

If you use or refer to this source code, please cite the following paper:

@inproceedings{cvpr2020-dang,
  title={On the Detection of Digital Face Manipulation},
  author={Hao Dang and Feng Liu and Joel Stehouwer and Xiaoming Liu and Anil Jain},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  address={Seattle, WA},
  year={2020}
}
Comments
  • Is it possible to release the script for generating edited images by FaceApp?

    Hi, thanks for releasing the code and dataset! Part of your dataset is generated by FaceApp (using automated scripts running on Android devices). I am wondering if you could also release this Android script? I also plan to generate some edited images using FaceApp, and an automated script would be quite helpful! Thanks!

    opened by zjxgithub 2
  • Question about mask images in dataset

    Thank you for releasing the code and the DFFD dataset!

    I noticed that in the "faceapp" part of the dataset, there is a ground-truth manipulation mask image for each fake image. How are these mask images generated?

    The paper mentions that the ground-truth manipulation masks were computed from the source images and the fake images, but I still do not understand how.

    Thank you for answering my question. :)
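
    For reference, one plausible reading of "computed from the source and fake images" is a per-pixel difference between the aligned source image and the manipulated image; the hypothetical sketch below illustrates that idea (the normalization and threshold choices here are assumptions, not the released DFFD procedure):

    import numpy as np
    from PIL import Image

    def difference_mask(source_path, fake_path, threshold=0.1):
        # Hypothetical: average absolute per-pixel difference over RGB channels,
        # normalized to [0, 1] and optionally binarized into a mask.
        src = np.asarray(Image.open(source_path).convert('RGB'), dtype=np.float32) / 255.0
        fake = np.asarray(Image.open(fake_path).convert('RGB'), dtype=np.float32) / 255.0
        diff = np.abs(src - fake).mean(axis=2)
        diff = diff / (diff.max() + 1e-8)
        return (diff > threshold).astype(np.float32)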

    opened by piddnad 2
  • Several questions about the dataset

    Thanks for releasing the code and the dataset. I have some questions about the dataset:

    • In align_faces/align_faces.m inside scripts.zip, a file called box.txt is referenced, but I can't find it anywhere. It seems crucial for aligning and cropping the images.

    • All of the images in the dataset have a resolution of 299x299. I wonder how you processed the images from CelebA; I remember the aligned and cropped CelebA images have a resolution of 128x128.
    opened by wheatdog 2
  • attention map and gt mask matching

    Hi, thanks for your work. I have a small question. The attention map size is 19x19, but the gt mask (diff image) is 299x299. Are they matched by downsampling the gt mask?
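
    For reference, matching the two resolutions is commonly done by resizing the 299x299 ground-truth mask down to the 19x19 attention grid before computing the map loss; a minimal sketch of that general approach (not necessarily the repository's exact resampling choice):

    import torch
    import torch.nn.functional as F

    gt_mask = torch.rand(1, 1, 299, 299)  # ground-truth manipulation mask
    # Downsample to the attention-map resolution before comparing with the 19x19 map
    gt_small = F.interpolate(gt_mask, size=(19, 19), mode='bilinear', align_corners=False)
    print(gt_small.shape)  # torch.Size([1, 1, 19, 19])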

    opened by neverUseThisName 1
  • Are label information leaked in testing process?

    Thanks for uploading your code and dataset. After a quick look, my understanding of your prediction process is: generate masks with scripts for the test data, then feed the test data and their masks into the trained model to predict. But I was confused that in your test.py file, you get the dataset like this:

    def get_dataset():
      return Dataset('test', BATCH_SIZE, CONFIG['img_size'], CONFIG['map_size'], CONFIG['norms'], SEED)
    

    then you distinguish the masks of real and fake photos by using their labels in dataset.py:

      def __getitem__(self, index):
        im_name = self.images[index]
        img = self.load_image(im_name)
        if self.label_name == 'Real':
          msk = torch.zeros(1,19,19)  # real images get an all-zero mask
        else:
          # fake images load the corresponding ground-truth manipulation mask
          msk = self.load_mask(im_name.replace('Fake/', 'Mask/'))
        return {'img': img, 'msk': msk, 'lab': self.label, 'im_name': im_name}
    

    Is it fair to distinguish masks by label_name in the testing process? I also wonder how to create the Mask/ folder when predicting fake images that do not have corresponding real images?

    If i misunderstand anything please correct me, thanks a lot!

    opened by insomnia1996 0
  • May I know where I can find the imagenet pretrained model?

    Hi,

    For using the pretrained model xception-b5690688.pth, may I know where I can find the file specified here: https://github.com/JStehouwer/FFD_CVPR2020/blob/master/xception.py#L243

    Thanks.
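
    For reference, xception-b5690688.pth is the filename of the ImageNet-pretrained Xception checkpoint distributed with the pretrained-models.pytorch (pretrainedmodels) package; one way to fetch it is sketched below (the URL is the one that package has used, offered as an assumption to verify rather than an official source):

    import torch

    # Assumed download location (from the `pretrainedmodels` package); verify before use.
    URL = 'http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth'
    state_dict = torch.hub.load_state_dict_from_url(URL, progress=True)
    torch.save(state_dict, 'xception-b5690688.pth')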

    opened by ilovecv 2
  • Error in get_batch in train.py

    Greetings,

    Many thanks for your work. I am very interested in it and want to try out your model. When I ran train*.py, I encountered the following issue; here is part of the error output.

    The output from the main and worker processes is interleaved; the recoverable fragments are:

    File "D:\Fake Detector\attention_map_to_detect_manipulation\FFD_CVPR2020\dataset.py", line 91
      batch = [next(_.generator, None) for _ in self.datasets]
    File "D:\Fake Detector\attention_map_to_detect_manipulation\FFD_CVPR2020\dataset.py", line 73, in get_batch
      ...
    self = reduction.pickle.load(from_parent)
    EOFError: Ran out of input

    reduction.dump(process_obj, to_child)
    File "C:\Users\xxx\anaconda3\envs\d2l\lib\multiprocessing\reduction.py", line 60, in dump
      ForkingPickler(file, protocol).dump(obj)
    TypeError: cannot pickle 'generator' object

    What I did was simply create the data/train/Real (and data/train/Fake) directories, place my image dataset into the corresponding folders, and then run train.py. However, it does not seem to work. May I ask whether I missed anything? I am running the program on Windows and I don't know whether that affects things as well.
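
    For reference, errors like "cannot pickle 'generator' object" together with "Ran out of input" on Windows typically come from the spawn start method trying to pickle objects (such as generators) when launching worker processes. A common general workaround, offered here as a suggestion rather than a confirmed fix for this repository, is to guard the script's entry point and keep generator state out of what gets sent to workers:

    # Sketch of the usual Windows-safe entry-point guard for scripts that use
    # multiprocessing; module-level code is re-executed in spawned processes,
    # so dataset/model construction should happen inside main().
    def main():
        # build datasets, model, and run the training loop here
        ...

    if __name__ == '__main__':
        main()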

    opened by bitrookie 1
  • Use pretrained model to classify own data?

    Hi @JStehouwer - thank you so much for the awesome code (v2.1)!

    I am trying to use your pretrained model on my own images in order to try out the classifier.

    Are you able to confirm:

    • Filename and format of pretrained model
    • Whether anything else is needed to perform the above classification

    Thanks again

    opened by jtlz2 4
  • dataset questions

    1. Was the published dataset (FFHQ, FaceAPP, StarGAN, PGGAN, StyleGAN) randomly selected? How were the StarGAN masks generated, and how was the specific set of CelebA pictures used determined?
    2. I have downloaded the FF++, CelebA, and DeepFaceLab datasets; how do I randomly select the training, test, and validation sets, and how do I set the random seed?
    3. Which datasets need alignment processing, and how should it be done? Please specify.

    Thank you for your work; it is very good and I will follow it. But these dataset problems are making my work difficult, and I hope to get your help.

    opened by miaoct 2
Releases (v2.1)