Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders

Overview

This repository contains code for modeling category-selective cortical regions (face-, body-, place-, and object-selective areas) with Topographic Variational Autoencoders (TVAEs), and for comparing their selectivity against a non-topographic pretrained AlexNet baseline and the TDANN.

Getting Started

Install requirements with Anaconda:

conda env create -f environment.yml

Activate the conda environment:

conda activate tvae

Install the tvae package

Install the tvae package inside your conda environment; this allows you to run experiments with the tvae command. At the root of the project directory, run (using your environment's pip):

pip3 install -e .

If you need help finding your environment's pip, run which python; it should point to a directory such as .../anaconda3/envs/tvae/bin/, where pip is also located.

(Optional) Setup Weights & Biases:

This repository uses Weights & Biases for experiment tracking. By default this is turned off. However, if you would like to use this (highly recommended!) functionality, all you have to do is set 'wandb_on': True in the experiment config, and set your account's project and entity names in the tvae/utils/logging.py file.

For more information on making a Weights & Biases account, see the guide on creating a Weights & Biases account and the associated quickstart guide.
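As a rough sketch (the variable names inside tvae/utils/logging.py below are assumptions, not taken verbatim from the repository), the two changes look something like this:

    # In the experiment config (tvae/experiments/...), enable logging:
    config = {
        'wandb_on': True,   # turn on Weights & Biases logging
        # ... remaining model and training options ...
    }

    # In tvae/utils/logging.py, fill in your own account details.
    # The names here are placeholders; these are the values a logging
    # wrapper would typically hand to wandb.init(project=..., entity=...).
    WANDB_PROJECT = 'tvae-ffa-modeling'
    WANDB_ENTITY = 'your-wandb-username'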

Running an experiment

To evaluate the selectivity of the pretrained AlexNet (the non-topographic baseline), you can run:

  • tvae --name 'ffa_modeling_pretrained_alexnet'

To train and evaluate the selectivity of the TVAE for objects, faces, bodies, and places, you can run:

  • tvae --name 'ffa_modeling_fc6'

To train and evaluate the selectivity of the TDANN for objects, faces, bodies, and places, you can run:

  • tvae --name 'ffa_modeling_tdann'

To evaluate the selectivity of the TVAE on abstract categories (animacy vs. inanimacy):

  • tvae --name 'ffa_modeling_fc6_functional'

To evaluate the selectivity of the TDANN on abstract categories (animacy vs. inanimacy):

  • tvae --name 'ffa_modeling_tdann_functional'

These 'functional' experiment files can also be easily modified to test selectivity for big vs. small objects, simply by changing the directories of the input images, as sketched below.
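For illustration only, assuming the functional experiment configs reference their stimulus folders through directory-path entries (the key names and paths below are hypothetical), the change amounts to repointing those paths:

    # Hypothetical sketch: repoint the category stimulus directories in a
    # 'functional' experiment config to test a big vs. small object contrast.
    config['category_a_dir'] = './data/stimuli/big_objects'    # e.g. was the animate image set
    config['category_b_dir'] = './data/stimuli/small_objects'  # e.g. was the inanimate image set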

Basics of the framework

  • All experiments can be found in tvae/experiments/, and begin with the model specification, followed by the experiment config.

Model Architecture Options

  • 'mu_init': int, Initialization value for the mu parameter
  • 's_dim': int, Dimensionality of the latent space
  • 'k': int, Size of the summation kernel used to define the local topographic structure
  • 'group_kernel': tuple of int, Defines the size of the kernel used by the grouper; the exact definition and relationship to W varies for each experiment (see the illustrative sketch after this list)
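To make these concrete, here is a sketch of how the model options might appear in an experiment config; the values are illustrative, not recommended defaults:

    model_config = {
        'mu_init': 5,               # initialization value for the mu parameter
        's_dim': 2048,              # dimensionality of the latent space
        'k': 5,                     # summation kernel size (local topographic structure)
        'group_kernel': (5, 5, 1),  # grouper kernel size; exact meaning varies per experiment
    }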

Training Options

  • 'wandb_on': bool, If True, use Weights & Biases logging
  • 'lr': float, Learning rate
  • 'momentum': float, Standard momentum used in SGD
  • 'max_epochs': int, Total training epochs
  • 'eval_epochs': int, Epochs between evaluations on the test set (for MNIST)
  • 'batch_size': int, Number of samples per batch
  • 'n_is_samples': int, Number of importance samples when computing the log-likelihood on MNIST (see the sketch below)
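An analogous sketch of the training options block; again, the values are illustrative only:

    train_config = {
        'wandb_on': False,    # Weights & Biases logging is off by default
        'lr': 1e-3,           # learning rate
        'momentum': 0.9,      # SGD momentum
        'max_epochs': 100,    # total training epochs
        'eval_epochs': 10,    # epochs between evaluations on the test set (MNIST)
        'batch_size': 128,    # samples per batch
        'n_is_samples': 100,  # importance samples for the MNIST log-likelihood
    }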