Repository for the AugmentedPCA Python package.

Overview

This Python package provides implementations of Augmented Principal Component Analysis (AugmentedPCA) - a family of linear factor models that find a set of factors aligned with an augmenting objective in addition to the canonical PCA objective of finding factors that represent the data variance. AugmentedPCA can be split into two general families of models: adversarial AugmentedPCA and supervised AugmentedPCA.

Supervised AugmentedPCA

In supervised AugmentedPCA (SAPCA), the augmenting objective is to make the factors aligned with the data labels, or some outcome, in addition to having the factors explain the variance of the original observed or primary data. SAPCA is useful when predictivity of latent components with respect to a set of data labels or outcomes is desired. SAPCA is equivalent to a supervised autoencoder (SAE) with a single hidden layer. Therefore, SAPCA can be applied in situations where the properties of latent representations enforced via deep SAEs are desired, yet where limited data or training inconsistencies are a concern. Below is a diagram depicting the relationship between primary data, supervision data, and the resulting SAPCA factors.

SAPCA diagram

Adversarial AugmentedPCA

In adversarial AugmentedPCA (AAPCA), the augmenting objective is to make the factors orthogonal to a set of concomitant data, in addition to having the factors explain the variance of the original observed or primary data. AAPCA can be used in situations where one wishes to enforce invariance of latent components to a set of concomitant data, and is equivalent to an adversarial autoencoder with a single hidden layer. Below is a diagram depicting the relationship between primary data, concomitant data, and the resulting AAPCA factors.

AAPCA diagram

Documentation

Documentation for AugmentedPCA is available on this documentation site.

Provided documentation includes:

  • Motivation - Motivation behind AugmentedPCA models and the different approximate inference strategies.

  • Model formulation - Overview of different models and approximate inference strategies as well as more in-depth mathematical descriptions.

  • Tutorials - Step-by-step guide on how to use the different offered AugmentedPCA models.

  • Examples - Use case examples for the different models.

Dependencies

The AugmentedPCA package is written in Python, and requires Python >= 3.6 to run. AugmentedPCA relies on the following libraries and version numbers:

Installation

To install the latest stable release, use pip:

$ pip install augmented-pca
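
If the installation succeeded, the package can be imported under the apca namespace, which is the module name used throughout the examples below:

$ python -c "import apca"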

Issue Tracking and Reports

Please use the GitHub issue tracker associated with the AugmentedPCA repository for issue tracking, filing bug reports, and asking general questions about the package or project.

Quick Introduction

A quick guide to using AugmentedPCA is given in this section. For a more in-depth guide, see our documentation.

Importing AugmentedPCA Models

APCA models can be imported from the models.py module. Below we show an example of importing the AAPCA model.

# Import adversarial AugmentedPCA model
from apca.models import AAPCA

Alternatively, all offered AugmentedPCA models can be imported at once.

# Import all AugmentedPCA models
from apca.models import *

Instantiating AugmentedPCA Models

APCA models are instantiated by assigning either an SAPCA or AAPCA object to a variable. During instantiation, one has the option to define the parameters n_components and mu, which represent the number of components and the augmenting objective strength, respectively. Additionally, the approximate inference strategy can be selected through the inference parameter.

# Define model parameters
n_components = 2        # factors will have dimensionality of 2
mu = 1.0                # augmenting objective strength equal to 1 
inference = 'encoded'   # encoded approximate inference strategy

# Instantiate adversarial AugmentedPCA model
aapca = AAPCA(n_components=n_components, mu=mu, inference=inference)

Fitting AugmentedPCA Models

APCA models closely follow the style and implementation of scikit-learn's PCA, with many of the same methods and functionality. Similar to scikit-learn models, AugmentedPCA models are fit using the fit() method. fit() takes two parameters: X, the matrix of primary data, and Y, the matrix of augmenting data.

# Import numpy
import numpy as np

# Generate synthetic data
# Note: primary and augmenting data must have same number of samples/same first dimension size
n_samp = 100
X = np.random.randn(n_samp, 20)   # primary data, 100 samples with dimensionality of 20
Y = np.random.randn(n_samp, 3)    # concomitant data, 100 samples with dimensionality of 3

# Fit adversarial AugmentedPCA instance
aapca.fit(X=X, Y=Y)

Alternatively, AugmentedPCA models can be fit using the fit_transform() method, which takes the same parameters as the fit() method but also returns a matrix of components or factors.

# Fit adversarial AugmentedPCA instance and generate components
S = aapca.fit_transform(X=X, Y=Y)
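
Since the returned factor matrix has one row per sample and one column per component, a quick sanity check (a minimal sketch using the synthetic data and model defined above) is to inspect its shape; the cross-correlations between the factors and the concomitant data can also be examined, since the adversarial objective encourages the factors to be invariant to Y.

# Factors have shape (n_samples, n_components)
print(S.shape)   # (100, 2) for the synthetic data above

# Cross-correlations between factors and concomitant data
# (the adversarial augmenting objective encourages these to be small)
corr = np.corrcoef(S.T, Y.T)[:S.shape[1], S.shape[1]:]
print(corr)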

Approximate Inference Strategies

In this section, we give a brief overview of the different approximate inference strategies offered for AugmentedPCA. Inference strategy should be chosen based on the data on which the AugmentedPCA model will be used as well as the specific use case. Both SAPCA and AAPCA models use the jointly-encoded approximate inference strategy by default.
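
For reference, the strategy is selected through the inference parameter at instantiation. A minimal sketch is shown below; the 'local' and 'encoded' keywords appear in the examples that follow, while 'joint' is assumed here to be the keyword for the jointly-encoded strategy.

# Import supervised AugmentedPCA model
from apca.models import SAPCA

# Select the approximate inference strategy at instantiation time
sapca_local = SAPCA(n_components=2, mu=1.0, inference='local')      # local
sapca_encoded = SAPCA(n_components=2, mu=1.0, inference='encoded')  # encoded
sapca_joint = SAPCA(n_components=2, mu=1.0, inference='joint')      # jointly-encoded (assumed keyword)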

Local

In the local approximate inference strategy, the factors (local variables associated with each observation) are included in both the likelihood relating the factors to the primary data and the augmenting objective. Below is a diagram from our paper depicting the local inference strategy.

local inference diagram

Because the local variables are included in the augmenting objective, given new data we must have both primary and augmenting data to obtain factors. Thus, the local inference strategy should only be used for inference on new data when both primary and augmenting data are available. Below we show an example of how to fit a SAPCA model with local approximate inference strategy to training data and obtain factors for test data.

# Import numpy
import numpy as np

# Import supervised AugmentedPCA
from apca.models import SAPCA

# Generate synthetic data and labels
n_samp = 100
X = np.random.randn(n_samp, 20)
Y = np.random.randint(low=0, high=2, size=(n_samp, 1), dtype=int)   # binary labels

# Generate test/train splits
train_pct = 0.7
idx = np.arange(start=0, stop=n_samp, step=1, dtype=int)
np.random.shuffle(idx)
n_train = int(train_pct * len(idx))
train_idx = idx[:n_train]
test_idx = idx[n_train:]

# Split data into test/train sets
X_train = X[train_idx, :]
X_test = X[test_idx, :]
Y_train = Y[train_idx, :]
Y_test = Y[test_idx, :]

# Instantiate supervised AugmentedPCA model with local approximate inference strategy
sapca = SAPCA(n_components=3, mu=5.0, inference='local')

# Fit supervised AugmentedPCA model
sapca.fit(X=X_train, Y=Y_train)

# Generate components for test set
# Note: both primary and augmenting data are needed to obtain factors
S_test = sapca.transform(X=X_test, Y=Y_test)

Note that when factors are generated for the test set, the transform() method requires both the primary data X_test and the labels Y_test to be passed as parameters. For a more in-depth description of the local approximate inference strategy, see our paper or the corresponding documentation section.

Encoded

In the encoded approximate inference strategy, a linear encoder is used to transform the data into factors or components. This inference strategy is termed "encoded" because the augmenting objective is enforced via an encoding function. Below is a diagram depicting the encoded inference strategy.

encoded inference diagram

In contrast to the local inference strategy, when factors are generated for the test set under the encoded inference strategy the transform() method only requires the primary data X_test. Below we show an example of how to fit a SAPCA model with encoded approximate inference strategy to training data and obtain factors for test data.

# Instantiate supervised AugmentedPCA model with encoded approximate inference strategy
sapca = SAPCA(n_components=3, mu=5.0, inference='encoded')

# Fit supervised AugmentedPCA model
# Note: both primary and augmenting data are required to fit the model
sapca.fit(X=X_train, Y=Y_train)

# Generate components for test set
# Note: only primary data are needed to obtain factors
S_test = sapca.transform(X=X_test)

For a more in-depth description of the encoded approximate inference strategy, see our paper or the corresponding documentation section.

Jointly-Encoded

The jointly-encoded approximate inference strategy is similar to the encoded strategy in that the augmenting objective is enforced through a linear encoding matrix. However, in the jointly-encoded inference strategy both the primary and augmenting data are required for computing factors, similar to the local inference strategy. Below is a diagram depicting the jointly-encoded inference strategy.

jointly-encoded inference diagram

Similar to the local inference strategy, when factors are generated for the test set under the jointly-encoded inference strategy the transform() method requires both the primary data X_test and augmenting data Y_test. Below we show an example of how to fit a SAPCA model with jointly-encoded approximate inference strategy to training data and obtain factors for test data.

# Instantiate supervised AugmentedPCA model with jointly-encoded approximate inference strategy
# Note: 'joint' is assumed here to be the keyword selecting the jointly-encoded strategy
sapca = SAPCA(n_components=3, mu=5.0, inference='joint')

# Fit supervised AugmentedPCA model
# Note: both primary and augmenting data are required to fit the model
sapca.fit(X=X_train, Y=Y_train)

# Generate components for test set
# Note: both primary and augmenting data are needed to obtain factors
S_test = sapca.transform(X=X_test, Y=Y_test)

For a more in-depth description of the jointly-encoded approximate inference strategy, see our paper or the corresponding documentation section.

Citation

Please cite our paper if you find this package helpful in your research:

@inproceedings{carson2021augmentedpca,
  title={{AugmentedPCA}: {A} {P}ython {P}ackage of {S}upervised and {A}dversarial {L}inear {F}actor {M}odels},
  author={{Carson IV}, William E. and Talbot, Austin and Carlson, David},
  year={2021},
  month={December},
  booktitle={{P}roceedings of {L}earning {M}eaningful {R}epresentations of {L}ife {W}orkshop at {NeurIPS} 2021}
}

Funding

This project was supported by the National Institute of Biomedical Imaging and Bioengineering and the National Institute of Mental Health through the National Institutes of Health BRAIN Initiative under Award Number R01EB026937.
