HW3 ― GAN, ACGAN and UDA

Overview

In this assignment, you are given datasets of human face and digit images. You will need to implement both a GAN and an ACGAN for generating human face images, and a DANN model for classifying digit images from different domains.

For more details, please click this link to view the slides of HW3.
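
Since the gradient reversal layer is the central trick behind DANN (Problems 3 and 4), a minimal sketch of it is given below. It assumes PyTorch (torch is on the allowed package list in the Packages section); the names GradReverse and grad_reverse are our own and are not part of the starter code.

from torch.autograd import Function

class GradReverse(Function):
    # Identity in the forward pass; multiplies the gradient by -lambda in the
    # backward pass, so the feature extractor is trained adversarially against
    # the domain classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Typical usage inside a DANN forward pass (encoder, label_classifier and
# domain_classifier are your own modules):
#   features      = encoder(images)
#   class_logits  = label_classifier(features)
#   domain_logits = domain_classifier(grad_reverse(features, lambd))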

Usage

To start working on this assignment, you should clone this repository to your local machine using the following command.

git clone https://github.com/dlcv-spring-2019/hw3-<username>.git

Note that you should replace <username> with your own GitHub username.

Dataset

In the starter code of this repository, we have provided a shell script for downloading and extracting the dataset for this assignment. For Linux users, simply use the following command.

bash ./get_dataset.sh

The shell script will automatically download the dataset and store the data in a folder called hw3_data. Note that this command by default only works on Linux. If you are using other operating systems, you should download the dataset from this link and unzip the compressed file manually.

⚠️ IMPORTANT NOTE ⚠️
You should keep a copy of the dataset only on your local machine. DO NOT upload the dataset to this remote repository. If you extract the dataset manually, be sure to put the files in a folder called hw3_data under the root directory of your local repository so that they are covered by the default .gitignore file.

Evaluation

To evaluate your UDA models in Problems 3 and 4, you can run the evaluation script provided in the starter code by using the following command.

python3 hw3_eval.py $1 $2
  • $1 is the path to your predicted results (e.g. hw3_data/digits/mnistm/test_pred.csv)
  • $2 is the path to the ground truth (e.g. hw3_data/digits/mnistm/test.csv)

Note that for hw3_eval.py to work, your predicted .csv files should have the same format as the ground truth files we provided in the dataset as shown below.

image_name label
00000.png 4
00001.png 3
00002.png 5
... ...
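
For reference, below is one minimal way to produce a prediction file in this format and roughly sanity-check it before running hw3_eval.py. It assumes the .csv files are comma-separated with image_name and label columns (as the header above suggests) and uses pandas from the allowed package list; the function names are our own.

import pandas as pd

def write_predictions(image_names, labels, out_csv):
    # Two columns, same header as the provided ground-truth files.
    pd.DataFrame({"image_name": image_names, "label": labels}).to_csv(out_csv, index=False)

def quick_accuracy(pred_csv, gt_csv):
    # Rough self-check only; hw3_eval.py is what counts for grading.
    pred = pd.read_csv(pred_csv).sort_values("image_name").reset_index(drop=True)
    gt = pd.read_csv(gt_csv).sort_values("image_name").reset_index(drop=True)
    return (pred["label"] == gt["label"]).mean()

# Example:
#   write_predictions(["00000.png", "00001.png"], [4, 3],
#                     "hw3_data/digits/mnistm/test_pred.csv")
#   print(quick_accuracy("hw3_data/digits/mnistm/test_pred.csv",
#                        "hw3_data/digits/mnistm/test.csv"))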

Submission Rules

Deadline

108/05/08 (Wed.) 01:00 AM

Late Submission Policy

You have a five-day delay quota for the whole semester. Once you have exceeded your quota, the credit of any late submission will be deducted by 30% for each day it is late.

Note that while it is possible to continue your work in this repository after the deadline, we will by default grade your last commit before the deadline specified above. If you wish to use your quota or submit an earlier version of your repository, please contact the TAs and let them know which commit to grade. For more information, please check out this post.

Academic Honesty

  • Taking any unfair advantage over other class members (or letting anyone do so) is strictly prohibited. Violating university policy would result in an F grade for this course (NOT negotiable).
  • If you refer to parts of any publicly available code, you are required to specify the references in your report (e.g. the URL of the GitHub repository).
  • You are encouraged to discuss homework assignments with your fellow class members, but you must complete the assignment by yourself. TAs will compare the similarity of everyone’s submission. Any form of cheating or plagiarism will not be tolerated and will also result in an F grade for students with such misconduct.

Submission Format

Aside from your own Python scripts and model files, you should make sure that your submission includes at least the following files in the root directory of this repository:

  1. hw3_<studentID>.pdf
    The report of your homework assignment. Refer to the "Grading" section in the slides for what you should include in the report. Note that you should replace <studentID> with your student ID, NOT your GitHub username.
  2. hw3_p1p2.sh
    The shell script file for running your GAN and ACGAN models. This script takes as input a folder and should output two images named fig1_2.jpg and fig2_2.jpg in the given folder (a minimal sketch of one such script appears after this list).
  3. hw3_p3.sh
    The shell script file for running your DANN model. This script takes as input a folder containing testing images and a string indicating the target domain, and should output the predicted results in a .csv file.
  4. hw3_p4.sh
    The shell script file for running your improved UDA model. This script takes as input a folder containing testing images and a string indicating the target domain, and should output the predicted results in a .csv file.
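
As a rough illustration of item 2, the sketch below shows one way the Python script called by hw3_p1p2.sh could produce fig1_2.jpg. The checkpoint name gan_generator.pth, the 100-dimensional latent space, and the 4×8 grid of 32 samples are assumptions, not requirements from the slides; fig2_2.jpg for the ACGAN part would be produced analogously, conditioning the generator on attribute labels.

import os
import sys

import torch
from torchvision.utils import save_image

LATENT_DIM = 100   # assumed latent size; use whatever your generator was trained with

def main():
    out_dir = sys.argv[1]                      # the output folder passed as $1
    torch.manual_seed(0)                       # fixed seed so the figure is reproducible

    # Assumed: the whole generator module was saved with torch.save(generator, ...).
    generator = torch.load("gan_generator.pth", map_location="cpu")
    generator.eval()

    with torch.no_grad():
        noise = torch.randn(32, LATENT_DIM)    # reshape if your generator expects (N, z, 1, 1)
        fakes = generator(noise)

    # save_image tiles the batch into a single grid image (4 rows of 8 here).
    save_image(fakes, os.path.join(out_dir, "fig1_2.jpg"), nrow=8, normalize=True)

if __name__ == "__main__":
    main()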

We will run your code in the following manner:

bash ./hw3_p1p2.sh $1
bash ./hw3_p3.sh $2 $3 $4
bash ./hw3_p4.sh $2 $3 $4
  • $1 is the folder to which you should output your fig1_2.jpg and fig2_2.jpg.
  • $2 is the directory of testing images in the target domain (e.g. hw3_data/digits/mnistm/test).
  • $3 is a string that indicates the name of the target domain, which will be either mnistm, usps or svhn.
    • Note that you should run the model whose target domain corresponds with $3. For example, when $3 is mnistm, you should make your prediction using your "USPS→MNIST-M" model, NOT your "MNIST-M→SVHN" model (see the sketch after this list).
  • $4 is the path to your output prediction file (e.g. hw3_data/digits/mnistm/test_pred.csv).
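
Below is a minimal sketch of a Python entry point that hw3_p3.sh or hw3_p4.sh could call with these three arguments (inside the shell script you would simply forward its positional arguments to python3). The checkpoint file names and the preprocessing are assumptions; in particular, only the "USPS→MNIST-M" and "MNIST-M→SVHN" pairings are stated above, so the entry for usps is a guess.

import os
import sys

import pandas as pd
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical checkpoint names, keyed by the target-domain string passed in.
CKPTS = {
    "mnistm": "dann_usps2mnistm.pth",   # USPS -> MNIST-M model
    "svhn":   "dann_mnistm2svhn.pth",   # MNIST-M -> SVHN model
    "usps":   "dann_svhn2usps.pth",     # assumed remaining pairing
}

def main():
    test_dir, target_domain, out_csv = sys.argv[1], sys.argv[2], sys.argv[3]

    model = torch.load(CKPTS[target_domain], map_location="cpu")
    model.eval()

    # Preprocessing must match whatever you used at training time.
    to_tensor = transforms.Compose([transforms.Resize((28, 28)), transforms.ToTensor()])

    names, labels = [], []
    with torch.no_grad():
        for name in sorted(os.listdir(test_dir)):
            img = Image.open(os.path.join(test_dir, name)).convert("RGB")
            logits = model(to_tensor(img).unsqueeze(0))   # assumes forward returns class logits
            names.append(name)
            labels.append(int(logits.argmax(dim=1).item()))

    pd.DataFrame({"image_name": names, "label": labels}).to_csv(out_csv, index=False)

if __name__ == "__main__":
    main()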

🆕 NOTE
For consistency, please use the python3 command to call your .py files in all of your shell scripts. Do not use python or any other alias; otherwise your commands may fail in our autograding scripts.

Packages

Below is a list of packages you are allowed to import in this assignment:

  • python: 3.5+
  • tensorflow: 1.13
  • keras: 2.2+
  • torch: 1.0
  • h5py: 2.9.0
  • numpy: 1.16.2
  • pandas: 0.24.0
  • torchvision: 0.2.2
  • cv2, matplotlib, skimage, Pillow, scipy
  • The Python Standard Library

Note that using packages with different versions will very likely lead to compatibility issues, so make sure that you install the correct version if one is specified above. E-mail or ask the TAs first if you want to import other packages.

Remarks

  • If your model is larger than GitHub’s maximum capacity (100MB), you can upload your model to another cloud service (e.g. Dropbox). However, your shell script files should be able to download the model automatically. For a tutorial on how to do this using Dropbox, please click this link.
  • DO NOT hard-code any paths in your files or scripts, and the execution time of your testing code should not exceed the allowed maximum of 10 minutes.
  • If we fail to run your code due to not following the submission rules, you will receive 0 credit for this assignment.

Q&A

If you have any problems related to HW3, you may e-mail or ask the TAs.
