This repository contains the source code and data for reproducing the results of the Deep Continuous Clustering paper.

Overview

Deep Continuous Clustering

Introduction

This is a PyTorch implementation of the DCC algorithm presented in the following paper:

Sohil Atul Shah and Vladlen Koltun. Deep Continuous Clustering.

If you use this code in your research, please cite our paper.

@article{shah2018DCC,
	author    = {Sohil Atul Shah and Vladlen Koltun},
	title     = {Deep Continuous Clustering},
	journal   = {arXiv:1803.01449},
	year      = {2018},
}

The source code and dataset are published under the MIT license. See LICENSE for details. In general, you can use the code for any purpose with proper attribution. If you do something interesting with the code, we'll be happy to know. Feel free to contact us.

Requirements

Pretraining SDAE

Note: The required files and checkpoints for the MNIST dataset are shared here.

Please create a new folder for each dataset under the data folder, following the structure of the mnist dataset. The training and validation data for each dataset must be placed under its respective folder.

We have already provided the train and test data files for the MNIST dataset. For example, pretraining of the SDAE can be started from the console as follows:

$ python pretraining.py --data mnist --tensorboard --id 1 --niter 50000 --lr 10 --step 20000

Different settings for the total number of iterations, learning rate, and step size may be required for other datasets. Please find the details in the comments inside the pretraining file.

Extracting Pretrained Features

The features from the pretrained SDAE network are extracted as follows:

$ python extract_feature.py --data mnist --net checkpoint_4.pth.tar --features pretrained

By default, the model checkpoint for the pretrained SDAE network is stored under the results folder.

Copying mkNN graph

The copyGraph program is used to merge the preprocessed mkNN graph (built using the code provided by RCC) and the extracted pretrained features. Note that the mkNN graph is built on the original data, not on the SDAE features.

$ python copyGraph.py --data mnist --graph pretrained.mat --features pretrained.pkl --out pretrained

The above command assumes that the graph is stored in the pretrained.mat file; the merged output is written back to the same pretrained.mat file.

DCC searches for a file named pretrained.mat, so please retain this name.
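
For reference, a rough sketch of what this merge does is shown below. The key name 'Z' for the embedded features and the file paths are assumptions; consult copyGraph.py for the exact fields the DCC code expects.

# Rough sketch of the merge performed by copyGraph.py; for reference only.
# The key name 'Z' for the embedded features is an assumption; consult
# copyGraph.py for the exact fields the DCC code expects.
import pickle
import numpy as np
import scipy.io as sio

graph = sio.loadmat('data/mnist/pretrained.mat')    # mkNN edges from edgeConstruction.py
with open('data/mnist/pretrained.pkl', 'rb') as f:
    feats = pickle.load(f)                          # pretrained SDAE features (N x d)

merged = dict(graph)                                # keep the edge set and weights
merged['Z'] = np.asarray(feats)                     # add the SDAE embedding under an assumed key
sio.savemat('data/mnist/pretrained.mat', merged)    # written back under the required name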

Running Deep Continuous Clustering

Once the features are extracted and the graph details have been merged, training of the DCC algorithm can be started.

For a sanity check, we have also provided a pretrained.mat file and SDAE model files for the MNIST dataset, located under the data folder. For example, DCC can be run on MNIST from the console as follows:

$ python DCC.py --data mnist --net checkpoint_4.pth.tar --tensorboard --id 1

The other preprocessed graph files can be found in the Google Drive folder provided by RCC.

Evaluation

Towards the end of the DCC run, i.e., once the stopping criterion is met, DCC evaluates the cluster assignment for the complete dataset. The evaluation output is logged to the tensorboard logger. The penultimate evaluated output is the one reported in the paper.

As in RCC, the AMI definition followed here differs slightly from the default definition in the sklearn package. To match the results listed in the paper, please modify it accordingly.
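
In recent versions of sklearn the normalization is exposed through the average_method argument. Which setting matches the paper's definition should be verified against the RCC/DCC evaluation code; the choice below is only an assumption.

import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

labels_true = np.array([0, 0, 1, 1, 2, 2])   # ground-truth labels
labels_pred = np.array([1, 1, 0, 0, 2, 2])   # DCC cluster assignments
# average_method='max' is an assumption about the normalization used in the
# RCC/DCC papers; sklearn >= 0.22 defaults to 'arithmetic'.
print(adjusted_mutual_info_score(labels_true, labels_pred, average_method='max'))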

The tensorboard logs for both pretraining and DCC are stored in the "runs/DCC" folder under results. The final embedded features 'U' and the cluster assignment for each sample are saved in the 'features.mat' file under results.

Creating input

The input files for SDAE pretraining, traindata.mat and testdata.mat, store the features of the N data samples as an N x D matrix. We followed a 4:1 ratio to split the data into training and validation sets. The provided make_data.py can be used to build the training and validation data. The distinction between training and validation sets is used only for the pretraining stage; end-to-end training is unsupervised, so all data is used.
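
As a minimal sketch of building these files, assuming the .mat files store the feature matrix under 'X' and the labels under 'Y' (check make_data.py for the exact keys the pretraining code reads):

# Minimal sketch of building traindata.mat / testdata.mat with a 4:1 split.
# The field names 'X' and 'Y' are assumptions; check make_data.py for the keys
# the pretraining code actually reads.
import numpy as np
import scipy.io as sio

X = np.random.randn(1000, 50).astype(np.float32)   # N x D feature matrix (placeholder data)
Y = np.random.randint(0, 10, size=1000)            # labels, used only for evaluation

rng = np.random.RandomState(0)
perm = rng.permutation(len(X))
split = int(0.8 * len(X))                          # 4:1 train/validation split
train_idx, val_idx = perm[:split], perm[split:]

sio.savemat('data/mydataset/traindata.mat', {'X': X[train_idx], 'Y': Y[train_idx]})
sio.savemat('data/mydataset/testdata.mat', {'X': X[val_idx], 'Y': Y[val_idx]})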

To construct the mkNN edge set and to create the preprocessed input file, pretrained.mat, from the raw feature file, use edgeConstruction.py released by RCC and follow the instructions therein. Note that the mkNN graph is built on the complete dataset. For simplicity, the code (after the pretraining phase) arranges the data in the order [trainset, testset]; the same ordering must be used for mkNN construction.
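
The ordering concern can be made concrete with a short sketch; the key name 'X' is again an assumption (see the note above):

# Assemble the complete dataset in [trainset, testset] order before building
# the mkNN graph, so that edge indices match the feature ordering used later.
import numpy as np
import scipy.io as sio

train = sio.loadmat('data/mydataset/traindata.mat')['X']
test = sio.loadmat('data/mydataset/testdata.mat')['X']
full = np.concatenate([train, test], axis=0)   # the ordering the post-pretraining code follows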

Understanding Steps Through Visual Example

Generate 2D clustered data with

python make_data.py --data easy

This creates 3 clusters whose centers are collinear. We would then expect to need only a 1-dimensional latent space (either x or y) to uniquely project the data onto the line passing through the cluster centers.

Figure: generated ground-truth clusters.
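
For intuition, a rough sketch of such a toy set is given below; make_data.py may use different centers, spreads, or sample counts.

# Rough sketch of the 'easy' toy data: three 2-D Gaussian blobs whose centers
# lie on a common line, so a 1-D latent code suffices to separate them.
import numpy as np

rng = np.random.RandomState(0)
centers = np.array([[-4.0, -4.0], [0.0, 0.0], [4.0, 4.0]])            # collinear centers
X = np.concatenate([c + 0.3 * rng.randn(200, 2) for c in centers])    # 600 samples
y = np.repeat([0, 1, 2], 200)                                         # ground-truth cluster ids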

Construct the mkNN graph with

python edgeConstruction.py --dataset easy --samples 600

Pretrain SDAE with

python pretraining.py --data easy --tensorboard --id 1 --niter 500 --dim 1 --lr 0.0001 --step 300

You can debug the pretraining losses using tensorboard (needs tensorflow) with

tensorboard --logdir data/easy/results/runs/pretraining/1/

Then navigate to the http link that is logged to the console.

Extract the pretrained features with

python extract_feature.py --data easy --net checkpoint_2.pth.tar --features pretrained --dim 1

Merge the preprocessed mkNN graph and the pretrained features with

python copyGraph.py --data easy --graph pretrained.mat --features pretrained.pkl --out pretrained

Run DCC with

python DCC.py --data easy --net checkpoint_2.pth.tar --tensorboard --id 1 --dim 1

Debug and show how the representatives shift over epochs with

tensorboard --logdir data/easy/results/runs/DCC/1/ --samples_per_plugin images=100

Pretraining and DCC together in one script

See easy_example.py for the previous easy-to-visualize example with all the steps done in one script. Execute the script to perform the previous section in one go. You can visualize the results, such as how the representatives drift over iterations, with the tensorboard command above by navigating to the Images tab.
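
easy_example.py in the repository is the authoritative version; the following is only a rough sketch of how the same steps could be chained from a single script using subprocess.

# Rough sketch of chaining the steps from one script. easy_example.py is the
# authoritative version and may call the modules directly instead.
import subprocess

steps = [
    "python make_data.py --data easy",
    "python edgeConstruction.py --dataset easy --samples 600",
    "python pretraining.py --data easy --tensorboard --id 1 --niter 500 --dim 1 --lr 0.0001 --step 300",
    "python extract_feature.py --data easy --net checkpoint_2.pth.tar --features pretrained --dim 1",
    "python copyGraph.py --data easy --graph pretrained.mat --features pretrained.pkl --out pretrained",
    "python DCC.py --data easy --net checkpoint_2.pth.tar --tensorboard --id 1 --dim 1",
]
for cmd in steps:
    subprocess.run(cmd.split(), check=True)   # stop immediately if a step fails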

Figure: how the representatives shift over epochs with an autoencoder.
