DeepFashion2 is a comprehensive fashion dataset.

Overview

DeepFashion2 Dataset


DeepFashion2 is a comprehensive fashion dataset. It contains 491K diverse images of 13 popular clothing categories from both commercial shopping stores and consumers. In total it has 801K clothing items, where each item in an image is labeled with scale, occlusion, zoom-in, viewpoint, category, style, bounding box, dense landmarks, and per-pixel mask. There are also 873K commercial-consumer clothes pairs.
The dataset is split into a training set (391K images), a validation set (34K images), and a test set (67K images).
Examples of DeepFashion2 are shown in Figure 1.

Figure 1: Examples of DeepFashion2.

From (1) to (4), each row shows clothes images with a different variation. In each row, the images are partitioned into two groups: the left three columns show clothes from commercial stores, while the right three columns are from customers. Within each group, the three images indicate three levels of difficulty with respect to the corresponding variation. Furthermore, in each row, the items in the two groups belong to the same clothing identity but come from two different domains, commercial and customer. Items of the same identity may have different styles, such as color and printing. Each item is annotated with landmarks and masks.


Download the Data

The DeepFashion2 dataset is available from the DeepFashion2 dataset link. You need to fill in the form to get the password for unzipping the files. Please refer to Data Description below for detailed information about the dataset.

Data Organization

Each image in each image set has a unique six-digit name such as 000001.jpg, and a corresponding annotation file in JSON format, such as 000001.json, is provided in the annotation set.
Each annotation file is organized as below:

  • source: a string, where 'shop' indicates that the image is from commercial store while 'user' indicates that the image is taken by users.
  • pair_id: a number. Images from the same shop and their corresponding consumer-taken images have the same pair id.
    • item 1
      • category_name: a string which indicates the category of the item.
      • category_id: a number corresponding to the category name: 1 = short sleeve top, 2 = long sleeve top, 3 = short sleeve outwear, 4 = long sleeve outwear, 5 = vest, 6 = sling, 7 = shorts, 8 = trousers, 9 = skirt, 10 = short sleeve dress, 11 = long sleeve dress, 12 = vest dress, 13 = sling dress.
      • style: a number that distinguishes clothing items across images with the same pair id. Clothing items with different style numbers in images with the same pair id differ in style, such as color, printing, and logo. A clothing item from a shop image and a clothing item from a user image form a positive commercial-consumer pair if they share the same style number, that number is greater than 0, and their images have the same pair id. (If you are confused by style, please refer to issue#10.)
      • bounding_box: [x1, y1, x2, y2], where (x1, y1) is the upper-left corner of the bounding box and (x2, y2) is the lower-right corner (width = x2 - x1; height = y2 - y1).
      • landmarks: [x1, y1, v1, ..., xn, yn, vn], where v represents visibility: v = 2 visible; v = 1 occluded; v = 0 not labeled. Landmarks are defined differently for different categories; the order of the landmark annotations for each category is shown in Figure 2.
      • segmentation: [[x1, y1, ..., xn, yn], [...]], where each [x1, y1, ..., xn, yn] is a polygon; a single clothing item may contain more than one polygon.
      • scale: a number, where 1 represents small scale, 2 represents modest scale and 3 represents large scale.
      • occlusion: a number, where 1 represents slight occlusion(including no occlusion), 2 represents medium occlusion and 3 represents heavy occlusion.
      • zoom_in: a number, where 1 represents no zoom-in, 2 represents medium zoom-in and 3 represents large zoom-in.
      • viewpoint: a number, where 1 represents no wear, 2 represents frontal viewpoint and 3 represents side or back viewpoint.
    • item 2
      ...
    • item n

Please note that 'pair_id' and 'source' are image-level labels. All clothing items in an image share the same 'pair_id' and 'source'.
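
As a minimal sketch of reading these annotations (assuming the schema above; the file path is illustrative):

```python
import json

# Category ids as defined above (1..13).
CATEGORIES = {
    1: "short sleeve top", 2: "long sleeve top", 3: "short sleeve outwear",
    4: "long sleeve outwear", 5: "vest", 6: "sling", 7: "shorts",
    8: "trousers", 9: "skirt", 10: "short sleeve dress",
    11: "long sleeve dress", 12: "vest dress", 13: "sling dress",
}

with open("train/annos/000001.json") as f:
    anno = json.load(f)

print(anno["source"], anno["pair_id"])  # image-level labels

for key, item in anno.items():
    if not key.startswith("item"):
        continue  # skip the image-level 'source' and 'pair_id' fields
    x1, y1, x2, y2 = item["bounding_box"]
    # landmarks are a flat list of (x, y, v) triplets
    lm = item["landmarks"]
    points = [(lm[i], lm[i + 1], lm[i + 2]) for i in range(0, len(lm), 3)]
    visible = sum(1 for _, _, v in points if v == 2)
    print(key, CATEGORIES[item["category_id"]], "style:", item["style"],
          "box:", (x2 - x1, y2 - y1), "visible landmarks:", visible)
```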

The definitions of landmarks and skeletons for the 13 categories are shown below. The numbers in the figure represent the order of the landmark annotations for each category in the annotation file. A total of 294 landmarks covering the 13 categories are defined.

Figure 2: Definitions of landmarks and skeletons.


We do not provide data in pairs. In the training dataset, images are organized with contiguous 'pair_id's, covering both consumer and shop images. (For example: 000001.jpg (pair_id: 1; from consumer), 000002.jpg (pair_id: 1; from shop), 000003.jpg (pair_id: 2; from consumer), 000004.jpg (pair_id: 2; from consumer), 000005.jpg (pair_id: 2; from consumer), 000006.jpg (pair_id: 2; from consumer), 000007.jpg (pair_id: 2; from shop), 000008.jpg (pair_id: 2; from shop), ...) A clothing item from a shop image and a clothing item from a consumer image are a positive commercial-consumer pair if they have the same style number greater than 0 and they come from images with the same pair id; otherwise they are a negative pair. In this way you can construct positive and negative training pairs at the instance level; see the sketch after the figure below.

As shown in the figure below, the first three images are from consumers and the last two are from shops; all five images share the same 'pair_id'. Clothing items in orange bounding boxes have the same 'style': 1. Clothing items in green bounding boxes have the same 'style': 2. The 'style' of the other clothing items, whose bounding boxes are not drawn in the figure, is 0, and they cannot form positive commercial-consumer pairs. One positive commercial-consumer pair is the annotated short sleeve top in the first image and the annotated short sleeve top in the last image. Our dataset makes it possible to construct instance-level pairs in a flexible way.

[image]
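
A minimal sketch of instance-level pair construction under these rules (the directory layout is assumed from the Data Description section below; helper names are illustrative):

```python
import json
from collections import defaultdict
from pathlib import Path

def collect_items(anno_dir):
    """Group (image_name, item_key, style) by (pair_id, source)."""
    groups = defaultdict(list)
    for path in sorted(Path(anno_dir).glob("*.json")):
        anno = json.loads(path.read_text())
        for key, item in anno.items():
            if key.startswith("item"):
                groups[(anno["pair_id"], anno["source"])].append(
                    (path.stem + ".jpg", key, item["style"]))
    return groups

groups = collect_items("train/annos")
positives = []
for (pid, src), consumer_items in groups.items():
    if src != "user":
        continue
    # match consumer items against shop items sharing the same pair_id;
    # a pair is positive iff both style numbers are equal and greater than 0
    for shop_img, shop_key, shop_style in groups.get((pid, "shop"), []):
        for user_img, user_key, user_style in consumer_items:
            if user_style == shop_style and user_style > 0:
                positives.append(((user_img, user_key), (shop_img, shop_key)))
print(len(positives), "positive commercial-consumer pairs")
```

All item pairs across the same pair_id that fail the style test, or pairs across different pair_ids, can be sampled as negatives.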

Data Description

Training images: train/image
Training annotations: train/annos

Validation images: validation/image
Validation annotations: validation/annos

Test images: test/image

We provide code to generate COCO-type annotations from our dataset in deepfashion2_to_coco.py. Please note that during evaluation, image_id is the numeric value of the image name (for example, the image_id of image 000001.jpg is 1). The JSON files in json_for_validation and json_for_test are generated according to this rule using deepfashion2_to_coco.py. In this way, you can generate ground-truth JSON files for evaluating the clothes detection and clothes segmentation tasks, which are not listed in the DeepFashion2 Challenge.
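
The image_id rule can be expressed as follows (a hypothetical helper, not part of the released code):

```python
from pathlib import Path

def image_id(image_name: str) -> int:
    # "000001.jpg" -> 1, matching the rule used during evaluation
    return int(Path(image_name).stem)

assert image_id("000001.jpg") == 1
```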

In the validation set, we provide image-level information in keypoints_val_information.json, retrieval_val_consumer_information.json, and retrieval_val_shop_information.json. (In the validation set, the first 10,844 images are from consumers and the last 20,681 images are from shops.) For the clothes detection and clothes segmentation tasks, which are not listed in the DeepFashion2 Challenge, keypoints_val_information.json can also be used.

We provide keypoints_val_vis.json, keypoints_val_vis_and_occ.json, val_query.json, and val_gallery.json for evaluation on the validation set. You can compute validation scores locally using the Evaluation Code and the above JSON files. You can also submit your results to the evaluation server in our DeepFashion2 Challenge.
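
If your ground truth and predictions are in COCO format, a local keypoint evaluation can be sketched with pycocotools as below. Note that the official Evaluation Code adapts COCO's keypoint evaluation to the 294 clothing landmarks (including its own per-landmark constants), so this stock COCOeval call only illustrates the workflow; "predictions.json" and the uniform sigmas are placeholders.

```python
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("json_for_validation/keypoints_val_vis.json")  # ground truth
coco_dt = coco_gt.loadRes("predictions.json")                 # your results
evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
# placeholder constants for 294 landmarks; the official code defines its own
evaluator.params.kpt_oks_sigmas = np.full(294, 0.05)
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, ...
```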

In the test set, we provide image-level information in keypoints_test_information.json, retrieval_test_consumer_information.json, and retrieval_test_shop_information.json. (In the test set, the first 20,681 images are from consumers and the last 41,948 images are from shops.) You need to submit your results to the evaluation server in our DeepFashion2 Challenge.

Dataset Statistics

Table 1 shows the statistics of images and annotations in DeepFashion2. (For statistics of the released images and annotations, please refer to the DeepFashion2 Challenge.)

Table 1: Statistics of DeepFashion2.

           Train     Validation         Test               Overall
images     390,884   33,669             67,342             491,895
bboxes     636,624   54,910             109,198            800,732
landmarks  636,624   54,910             109,198            800,732
masks      636,624   54,910             109,198            800,732
pairs      685,584   query: 12,550      query: 24,402      873,234
                     gallery: 37,183    gallery: 75,347

Figure 3 shows the statistics of different variations and the numbers of items of the 13 categories in DeepFashion2.

Figure 3: Statistics of DeepFashion2.


Benchmarks

Clothes Detection

This task detects clothes in an image by predicting a bounding box and a category label for each detected clothing item. The evaluation metrics are the bounding box's average precision: AP, AP50 (IoU = 0.50), and AP75 (IoU = 0.75).

Table 2: Clothes detection trained with released DeepFashion2 Dataset evaluated on validation set.

AP AP50 AP75
0.638 0.789 0.745

Table 3: Clothes detection on different validation subsets, including scale, occlusion, zoom-in, and viewpoint.

       Scale                     Occlusion                 Zoom_in                   Viewpoint                      Overall
       small   moderate  large   slight  medium  heavy     no      medium  large     no wear  frontal  side or back
AP     0.604   0.700     0.660   0.712   0.654   0.372     0.695   0.629   0.466     0.624    0.681    0.641         0.667
AP50   0.780   0.851     0.768   0.844   0.810   0.531     0.848   0.755   0.563     0.713    0.832    0.796         0.814
AP75   0.717   0.809     0.744   0.812   0.768   0.433     0.806   0.718   0.525     0.688    0.791    0.744         0.773

Landmark and Pose Estimation

This task aims to predict landmarks for each detected clothing item in an image. Similarly, we employ the evaluation metrics used by COCO for human pose estimation, computing the average precision for keypoints: AP, AP50 (OKS = 0.50), and AP75 (OKS = 0.75), where OKS indicates the object landmark similarity.
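
For reference, OKS follows the COCO keypoint definition that this metric adapts to clothing landmarks; a sketch of the per-item computation is below. The per-landmark constants k are assumptions here — the official Evaluation Code defines its own.

```python
import numpy as np

def oks(pred_xy, gt_xy, gt_vis, area, k):
    """pred_xy, gt_xy: (n, 2) arrays; gt_vis: (n,) visibility flags;
    area: object segment area; k: (n,) per-landmark constants."""
    d2 = np.sum((pred_xy - gt_xy) ** 2, axis=1)      # squared distances
    labeled = gt_vis > 0                             # v = 0 points are ignored
    e = d2 / (2.0 * area * k ** 2 + np.spacing(1))   # normalized error
    return float(np.exp(-e)[labeled].mean()) if labeled.any() else 0.0
```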

Table 4: Landmark estimation trained with released DeepFashion2 Dataset evaluated on validation set.

AP AP50 AP75
vis 0.605 0.790 0.684
vis && hide 0.529 0.775 0.596

Table 5: Landmark estimation on different validation subsets, including scale, occlusion, zoom-in, and viewpoint. Results of evaluation on visible landmarks only and on both visible and occluded landmarks are shown separately in the two rows of each metric.

       Scale                     Occlusion                 Zoom_in                   Viewpoint                      Overall
       small   moderate  large   slight  medium  heavy     no      medium  large     no wear  frontal  side or back
AP     0.587   0.687     0.599   0.669   0.631   0.398     0.688   0.559   0.375     0.527    0.677    0.536         0.641
       0.497   0.607     0.555   0.643   0.530   0.248     0.616   0.489   0.319     0.510    0.596    0.456         0.563
AP50   0.780   0.854     0.782   0.851   0.813   0.534     0.855   0.757   0.571     0.724    0.846    0.748         0.820
       0.764   0.839     0.774   0.847   0.799   0.479     0.848   0.744   0.549     0.716    0.832    0.727         0.805
AP75   0.671   0.779     0.678   0.760   0.718   0.440     0.786   0.633   0.390     0.571    0.771    0.610         0.728
       0.551   0.703     0.625   0.739   0.600   0.236     0.714   0.537   0.307     0.550    0.684    0.506         0.641

Figure 4 shows the results of landmark and pose estimation.

Figure 4: Results of landmark and pose estimation.


Clothes Segmentation

This task assigns a category label (including a background label) to each pixel of an item. The evaluation metrics are the average precision computed over masks: AP, AP50 (IoU = 0.50), and AP75 (IoU = 0.75).
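
The same pycocotools workflow sketched in the Data Description section applies here with iouType="segm" (and iouType="bbox" for detection), assuming COCO-format ground truth and results; the filenames below are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("val_segmentation_gt.json")           # placeholder ground truth
coco_dt = coco_gt.loadRes("mask_predictions.json")   # placeholder results
evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
```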

Table 6: Clothes segmentation trained with released DeepFashion2 Dataset evaluated on validation set.

AP AP50 AP75
0.640 0.797 0.754

Table 7: Clothes Segmentation on different validation subsets, including scale, occlusion, zoom-in, and viewpoint.

       Scale                     Occlusion                 Zoom_in                   Viewpoint                      Overall
       small   moderate  large   slight  medium  heavy     no      medium  large     no wear  frontal  side or back
AP     0.634   0.703     0.666   0.720   0.656   0.381     0.701   0.637   0.478     0.664    0.689    0.635         0.674
AP50   0.811   0.865     0.798   0.863   0.824   0.543     0.861   0.791   0.591     0.757    0.849    0.811         0.834
AP75   0.752   0.826     0.773   0.836   0.780   0.444     0.823   0.751   0.559     0.737    0.810    0.755         0.793

Figure 5 shows the results of clothes segmentation.

Figure 5: Results of clothes segmentation.


Consumer-to-Shop Clothes Retrieval

Given a detected item from a consumer-taken photo, this task aims to search the commercial images in the gallery for the items corresponding to the detected item. Top-k retrieval accuracy is employed as the evaluation metric. We emphasize retrieval performance while still considering the influence of the detector: if a clothing item fails to be detected, the query item is counted as missed.
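
A sketch of this metric under the stated convention (names are illustrative): a query scores a hit if any of its ground-truth gallery items appears among its top-k results, and queries whose item the detector missed count against accuracy.

```python
def top_k_accuracy(rankings, ground_truth, k=20):
    """rankings: query id -> ranked gallery ids (None if detection failed);
    ground_truth: query id -> set of correct gallery ids."""
    hits = 0
    for qid, gt in ground_truth.items():
        ranked = rankings.get(qid)
        if ranked and gt.intersection(ranked[:k]):
            hits += 1  # a correct gallery item appears in the top k
    return hits / len(ground_truth)
```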

Table 8: Consumer-to-Shop Clothes Retrieval trained with released DeepFashion2 Dataset using detected box evaluated on validation set.

Top-1 Top-5 Top-10 Top-15 Top-20
class 0.079 0.198 0.273 0.329 0.366
keypoints 0.182 0.326 0.416 0.469 0.510
segmentation 0.135 0.271 0.350 0.407 0.447
class+keys 0.192 0.345 0.435 0.488 0.524
class+seg 0.152 0.295 0.379 0.435 0.477

Table 9: Consumer-to-Shop Clothes Retrieval on different subsets of selected validation consumer-taken images. Each query item in these images has over 5 identical clothing items among the validation commercial images. Results of evaluation with the ground-truth box and with the detected box are shown separately in the two rows of each method. The evaluation metric is top-20 accuracy, except for the Overall columns, which report top-1, top-10, and top-20 accuracy.

            Scale                     Occlusion                 Zoom_in                   Viewpoint                      Overall
            small   moderate  large   slight  medium  heavy     no      medium  large     no wear  frontal  side or back  top-1   top-10  top-20
class       0.520   0.630     0.540   0.572   0.563   0.558     0.618   0.547   0.444     0.546    0.584    0.533         0.102   0.361   0.470
            0.485   0.537     0.502   0.527   0.508   0.383     0.553   0.496   0.405     0.499    0.523    0.487         0.091   0.312   0.415
pose        0.721   0.778     0.735   0.756   0.737   0.728     0.775   0.751   0.621     0.731    0.763    0.711         0.264   0.562   0.654
            0.637   0.702     0.691   0.710   0.670   0.580     0.710   0.701   0.560     0.690    0.700    0.645         0.243   0.497   0.588
mask        0.624   0.714     0.646   0.675   0.651   0.632     0.711   0.655   0.526     0.644    0.682    0.637         0.193   0.474   0.571
            0.552   0.657     0.608   0.639   0.593   0.555     0.654   0.613   0.495     0.615    0.630    0.565         0.186   0.422   0.520
pose+class  0.752   0.786     0.733   0.754   0.750   0.728     0.789   0.750   0.620     0.726    0.771    0.719         0.268   0.574   0.665
            0.691   0.730     0.705   0.725   0.706   0.605     0.746   0.709   0.582     0.699    0.723    0.684         0.244   0.522   0.617
mask+class  0.656   0.728     0.687   0.714   0.676   0.654     0.725   0.702   0.565     0.684    0.712    0.658         0.212   0.496   0.595
            0.610   0.666     0.649   0.676   0.623   0.549     0.674   0.655   0.536     0.648    0.661    0.604         0.208   0.451   0.542

Figure 6 shows queries with their top-5 retrieved clothing items. The first and seventh columns show customer images with bounding boxes predicted by the detection module; the second to sixth and the eighth to twelfth columns show the retrieved results from the store.

Figure 6: Results of clothes retrieval.


Citation

If you use the DeepFashion2 dataset in your work, please cite it as:

@article{DeepFashion2,
  author = {Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo},
  title={A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images},
  journal={CVPR},
  year={2019}
}