SEAM Match-RCNN

Official code of the paper "MovingFashion: a Benchmark for the Video-to-Shop Challenge".

License: CC BY-NC-SA 4.0

Installation

Requirements:

  • PyTorch 1.5.1 or newer, with cudatoolkit (10.2)
  • torchvision
  • tensorboard
  • cocoapi
  • OpenCV Python
  • tqdm
  • cython
  • CUDA >= 10

Step-by-step installation

# first, make sure that your conda is set up properly with the right environment
# for that, check that `which conda`, `which pip` and `which python` point to the
# right paths. From a clean conda env, this is what you need to do

conda create --name seam -y python=3
conda activate seam

pip install cython tqdm opencv-python

# follow PyTorch installation in https://pytorch.org/get-started/locally/
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

conda install tensorboard

export INSTALL_DIR=$PWD

# install pycocotools
cd $INSTALL_DIR
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install

# download SEAM
cd $INSTALL_DIR
git clone https://github.com/VIPS4/SEAM-Match-RCNN.git
cd SEAM-Match-RCNN
mkdir data
mkdir ckpt

unset INSTALL_DIR
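
After installation, a quick sanity check (illustrative, not part of the official setup) can confirm that the main dependencies import correctly and that CUDA is visible. Run it from the activated seam environment:

# environment sanity check (illustrative only)
import torch
import torchvision
import cv2
from pycocotools.coco import COCO  # verifies the cocoapi build

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
print("OpenCV:", cv2.__version__)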

Dataset

SEAM Match-RCNN has been trained and tested on the MovingFashion and DeepFashion2 datasets. Follow the instructions below to download and extract them.

We suggest downloading the datasets inside the data folder.

MovingFashion

The MovingFashion dataset is available for academic purposes here.

Deepfashion2

The DeepFashion2 dataset is available here. You need to fill in a form to get the password for unzipping the files.

Once the dataset has been extracted, use the provided DeepFtoCoco.py script to convert the annotations into COCO format, specifying the dataset path.

python DeepFtoCoco.py --path <dataset_root>
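
To verify that the conversion produced valid COCO-style annotations, they can be loaded back with the COCO API installed above. This is only a sketch: the json file name is hypothetical and depends on how DeepFtoCoco.py names its output.

# illustrative check of the converted annotations (json file name is hypothetical)
from pycocotools.coco import COCO

coco = COCO("<dataset_root>/train_coco.json")  # replace with the file produced by DeepFtoCoco.py
print("images:", len(coco.getImgIds()), "| annotations:", len(coco.getAnnIds()))
print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])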

Training

We provide scripts to train both Match-RCNN and SEAM Match-RCNN. Check the scripts for all available parameters.

Single GPU

#training of Match-RCNN
python train_matchrcnn.py --root_train <path_of_images_folder> --train_annots <json_path> --save_path <save_path> 

#training on movingfashion
python train_movingfashion.py --root <path_of_dataset_root> --train_annots <json_path> --test_annots <json_path> --pretrained_path <path_of_matchrcnn_model>


#training on multi-deepfashion2
python train_multiDF2.py --root <path_of_dataset_root> --train_annots <json_path> --test_annots <json_path> --pretrained_path <path_of_matchrcnn_model>

Multi GPU

Internally, we use torch.distributed.launch to start multi-GPU training. This PyTorch utility spawns as many Python processes as the number of GPUs to be used, and each process uses a single GPU; a generic initialization sketch is shown after the commands below.

#training of Match-RCNN
python -m torch.distributed.launch --nproc_per_node=<NUM_GPUS> train_matchrcnn.py --root_train <path_of_images_folder> --train_annots <json_path> --save_path <save_path>

#training on movingfashion
python -m torch.distributed.launch --nproc_per_node=<NUM_GPUS> train_movingfashion.py --root <path_of_dataset_root> --train_annots <json_path> --test_annots <json_path> --pretrained_path <path_of_matchrcnn_model> 

#training on multi-deepfashion2
python -m torch.distributed.launch --nproc_per_node=<NUM_GPUS> train_multiDF2.py --root <path_of_dataset_root> --train_annots <json_path> --test_annots <json_path> --pretrained_path <path_of_matchrcnn_model> 
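
For reference, a script launched with torch.distributed.launch typically initializes distributed training along the following lines. This is a generic sketch of the standard PyTorch pattern, not code taken from the repository scripts, which may differ in their details.

# generic torch.distributed.launch pattern (illustrative, not the repository's training code)
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # set by torch.distributed.launch
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)   # one GPU per process
dist.init_process_group(backend="nccl")  # reads MASTER_ADDR/MASTER_PORT etc. from the environment

model = torch.nn.Linear(10, 10).cuda(args.local_rank)  # placeholder model
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])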

Pre-Trained models

It is possible to start training from the Match-RCNN pre-trained model.

[MatchRCNN] The model pre-trained on DeepFashion2 is available for download here. It can be used to skip the first phase and start directly from the second one (training SEAM Match-RCNN).

We suggest downloading the model inside the ckpt folder.
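
The downloaded checkpoint is then passed to the training scripts via --pretrained_path. To inspect the file beforehand, it can be loaded directly with PyTorch (a minimal sketch; the file name below is hypothetical):

# illustrative inspection of the downloaded checkpoint (file name is hypothetical)
import torch

ckpt = torch.load("ckpt/matchrcnn_pretrained.pth", map_location="cpu")
print(type(ckpt))                  # typically a dict holding the model weights (and possibly more)
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # peek at the top-level keys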

Evaluation

To evaluate the SEAM Match-RCNN models, please use the following scripts.

#evaluation on movingfashion
python evaluate_movingfashion.py --root_test <path_of_dataset_root> --test_annots <json_path> --ckpt_path <checkpoint_path>


#evaluation on multi-deepfashion2
python evaluate_multiDF2.py --root_test <path_of_dataset_root> --test_annots <json_path> --ckpt_path <checkpoint_path>

Citation

@misc{godi2021movingfashion,
      title={MovingFashion: a Benchmark for the Video-to-Shop Challenge}, 
      author={Marco Godi and Christian Joppi and Geri Skenderi and Marco Cristani},
      year={2021},
      eprint={2110.02627},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

