DeepDiffusion: Unsupervised Learning of Retrieval-adapted Representations via Diffusion-based Ranking on Latent Feature Manifold

DeepDiffusion

Introduction

This repository provides the code for the DeepDiffusion algorithm for unsupervised learning of retrieval-adapted representations. The DeepDiffusion algorithm is proposed in the following paper.

Takahiko Furuya and Ryutarou Ohbuchi,
"DeepDiffusion: Unsupervised Learning of Retrieval-adapted Representations via Diffusion-based Ranking on Latent Feature Manifold",
Currently under review.

[Figure: overview of the DeepDiffusion algorithm]

DeepDiffusion learns retrieval-adapted feature representations via ranking on a latent feature manifold. By minimizing our newly proposed Latent Manifold Ranking loss, the encoder DNN and the latent feature manifold are jointly optimized for comparison of data samples. DeepDiffusion is applicable to a wide range of multimedia data types, including 3D shapes and 2D images. Unlike existing supervised metric learning losses (e.g., the contrastive loss and the triplet loss), DeepDiffusion learns representations suitable for information retrieval in a fully unsupervised manner.
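
The Latent Manifold Ranking loss itself is defined in the paper. As a rough illustration of what "diffusion-based ranking on a feature manifold" means, the sketch below implements the classical manifold-ranking scheme (Zhou et al.), which propagates a query's relevance over a kNN affinity graph via f = (I - alpha * S)^{-1} y, in plain NumPy. It is not the paper's loss; the neighborhood size k and diffusion parameter alpha are placeholder choices.

import numpy as np

def diffusion_ranking(features, query_idx, k=10, alpha=0.9):
    # Classical manifold ranking (Zhou et al.) on a kNN affinity graph.
    # features: (N, D) latent features; returns one relevance score per sample.
    feats = np.asarray(features, dtype=np.float64)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))   # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    # Keep only the k strongest edges per node, then symmetrize.
    drop = np.argsort(-W, axis=1)[:, k:]
    for i, cols in enumerate(drop):
        W[i, cols] = 0.0
    W = np.maximum(W, W.T)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Closed-form diffusion: f = (I - alpha * S)^{-1} y, with y the query indicator.
    y = np.zeros(len(feats))
    y[query_idx] = 1.0
    return np.linalg.solve(np.eye(len(feats)) - alpha * S, y)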

The instructions below describe how to prepare data (here, we use 3D point set data from the ModelNet10 dataset as an example) and how to train and evaluate feature representations with DeepDiffusion.

Pre-requisites

Our code has been tested with Python 3.6, TensorFlow 1.13, and CUDA 10.0 on Ubuntu 18.04.
The Python packages required to run the code can be installed with the command below.

pip install tensorflow-gpu==1.13.2 scipy scikit-learn h5py sobol sobol_seq

Preparing Data

Run the shell script "Prepare_ModelNet10.sh".
This script downloads the ModelNet10 dataset and converts the 3D surface models contained the dataset to 3D point sets. These 3D point sets will be saved in the "data" directory as the HDF files.
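
As a quick sanity check, the generated files can be inspected with h5py (installed above). The sketch below assumes a PointNet-style HDF5 layout with "data" and "label" datasets and uses a hypothetical file name; check the "data" directory for the actual file names and keys.

import h5py
import numpy as np

# Hypothetical file name; list the "data" directory for the actual HDF5 files.
with h5py.File("data/modelnet10_train.h5", "r") as f:
    print("datasets:", list(f.keys()))
    # "data" / "label" follow the common PointNet-style layout and may differ here.
    points = np.asarray(f["data"])   # e.g., (num_shapes, num_points, 3)
    labels = np.asarray(f["label"])  # e.g., (num_shapes,)
print(points.shape, labels.shape)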

Training the DNN with DeepDiffusion and evaluating the learned feature representations

Run the shell script "TrainAndTest_3DShape.sh".
By running this script, the PointNet [Qi, Su, et al., 2017] encoder is trained from scratch in an unsupervised manner. During the training of 300 epochs, retrieval accuracy in Mean Average Precision (MAP) of the testing dataset will be evaluated at every 10 epochs. If the training proceeds successfully, you will obtain a MAP score of nearly 80 %.
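
The reported MAP follows the standard retrieval protocol: each test sample queries the remaining test samples, the ranked list is scored by average precision, and the per-query scores are averaged. The sketch below is a minimal NumPy version of that metric for reference; it is not necessarily the exact evaluation code used in this repository.

import numpy as np

def retrieval_map(features, labels):
    # Mean Average Precision: every sample queries all other samples,
    # ranked by Euclidean distance in the learned feature space.
    feats = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    dists = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    aps = []
    for q in range(len(feats)):
        order = np.argsort(dists[q])
        order = order[order != q]  # drop the query itself
        rel = (labels[order] == labels[q]).astype(np.float64)
        if rel.sum() == 0:
            continue
        prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1.0)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))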
