Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks

Overview

Code, based on the PyTorch framework, for reproducing experiments from the paper Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks.

Install Requirements

Tested with Python 3.8.

pip install -r requirements.txt

1. Incremental Hierarchical Tensor Rank Learning

1.1 Generating Data

Matrix Completion/Sensing

python matrix_factorization_data_generator.py --task_type completion
  • Setting task_type to "sensing" will generate matrix sensing data.
  • Use the -h flag for information on the customizable run arguments.
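For intuition, here is a minimal sketch of how matrix completion data of this kind is typically generated: sample a low-rank ground-truth matrix and keep a uniformly random subset of its entries as observations. This is illustrative only (function and argument names are assumptions); matrix_factorization_data_generator.py implements the actual logic.

import torch

def generate_matrix_completion_data(dim=64, rank=5, num_observed=2048, seed=0):
    torch.manual_seed(seed)
    # Low-rank ground truth W* = U V^T.
    u, v = torch.randn(dim, rank), torch.randn(dim, rank)
    target = u @ v.t()
    # Observe a uniformly random subset of entries.
    flat_indices = torch.randperm(dim * dim)[:num_observed]
    rows = torch.div(flat_indices, dim, rounding_mode="floor")
    cols = flat_indices % dim
    return target, (rows, cols), target[rows, cols]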

Tensor Completion/Sensing

python tensor_sensing_data_generator.py --task_type completion
  • Setting task_type to "sensing" will generate tensor sensing data.
  • Use the -h flag for information on the customizable run arguments.
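Analogously, the sketch below illustrates tensor sensing data: measurements y_i = <A_i, T*> of a low (CP) rank ground truth T* under Gaussian measurement tensors A_i. Names and defaults are assumptions; tensor_sensing_data_generator.py is the actual generator.

import torch

def generate_tensor_sensing_data(shape=(8, 8, 8), rank=3, num_measurements=2048, seed=0):
    torch.manual_seed(seed)
    # Rank-`rank` CP ground truth: sum of outer products of factor columns.
    factors = [torch.randn(dim, rank) for dim in shape]
    target = torch.einsum("ir,jr,kr->ijk", *factors)
    # Gaussian measurement tensors; y_i = <A_i, T*>.
    measurements = torch.randn(num_measurements, *shape)
    y = (measurements * target).flatten(start_dim=1).sum(dim=1)
    return target, measurements, y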

1.2 Running Experiments

Matrix Factorization

python matrix_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/mf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
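As background for this experiment, a deep matrix factorization parameterizes the recovered matrix as a product of several weight matrices and fits the observed entries with gradient descent. The following is a minimal sketch under small near-zero initialization (names and hyperparameters are assumptions, not the runner's exact implementation).

import torch

def train_deep_matrix_factorization(rows, cols, observed_values, dim=64,
                                    depth=3, lr=1e-3, epochs=10000, init_scale=1e-3):
    # Product of `depth` square factors, initialized near zero.
    layers = [torch.nn.Parameter(init_scale * torch.randn(dim, dim)) for _ in range(depth)]
    optimizer = torch.optim.SGD(layers, lr=lr)
    for _ in range(epochs):
        w = layers[0]
        for layer in layers[1:]:
            w = w @ layer
        # Squared loss over the observed entries only.
        loss = ((w[rows, cols] - observed_values) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return w.detach()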

Tensor Factorization

python tensor_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/tf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
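For reference, the tensor factorization analyzed here is a CP factorization fit to the observed entries by gradient descent. A minimal sketch for an order-3 tensor (illustrative; names and hyperparameters are assumptions):

import torch

def train_tensor_factorization(indices, observed_values, shape=(8, 8, 8),
                               num_components=64, lr=1e-2, epochs=10000, init_scale=1e-3):
    factors = [torch.nn.Parameter(init_scale * torch.randn(dim, num_components))
               for dim in shape]
    optimizer = torch.optim.SGD(factors, lr=lr)
    i, j, k = indices
    for _ in range(epochs):
        # Entry (i, j, k) of the CP tensor is sum_r A[i, r] * B[j, r] * C[k, r].
        preds = (factors[0][i] * factors[1][j] * factors[2][k]).sum(dim=1)
        loss = ((preds - observed_values) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.einsum("ir,jr,kr->ijk", *factors).detach()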

Hierarchical Tensor Factorization

python hierarchical_tensor_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/htf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
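The quantity these experiments track is the hierarchical tensor rank, i.e. the ranks of the tensor's matricizations according to the factorization's mode tree. The following generic helper (not code from the repository) computes the singular values of one matricization:

import math
import torch

def matricization_singular_values(tensor, row_modes):
    # Arrange `row_modes` as matrix rows and the remaining modes as columns,
    # then take singular values; the matricization rank is the number of
    # non-negligible singular values.
    col_modes = [m for m in range(tensor.dim()) if m not in row_modes]
    num_rows = math.prod(tensor.shape[m] for m in row_modes)
    mat = tensor.permute(*row_modes, *col_modes).reshape(num_rows, -1)
    return torch.linalg.svdvals(mat)

# E.g., the {0, 1} vs. {2, 3} matricization of an order-4 tensor:
# s = matricization_singular_values(torch.randn(4, 4, 4, 4), row_modes=[0, 1])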

1.3 Plotting Results

Plotting metrics against the number of iterations for an experiment (or multiple experiments) can be done by:

python dynamical_analysis_results_multi_plotter.py \
--plot_config_path <path_to_plot_config_file>
  • plot_config_path should point to a file with the plot configuration. For example, plot_configs/mf_tf_htf_dyn_plot_config.json is the configuration used to create the plot below. To run it, it suffices to fill in the checkpoint_path fields (checkpoints are created during training inside the respective experiment's folder).

Example plot:

2. Countering Locality Bias of Convolutional Networks via Regularization

2.1. Is Same Class

2.1.1 Generating Data

Generating train data is done by running:

python is_same_class_data_generator.py --train --num_samples 5000

For test data use:

python is_same_class_data_generator.py --num_samples 10000
  • Use the output_dir argument to set the output directory in which the datasets will be saved (default is ./data/is_same).
  • The train flag determines whether samples are generated from the train or test split of the underlying dataset.
  • Specify num_samples to set how many samples to generate (a sketch of a single sample's structure appears below).
  • Use the -h flag for information on the customizable run arguments.
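A sketch of how a single sample could be structured: two source images placed on a blank canvas at a controlled horizontal distance, labeled by whether they share a class. This is an assumption for illustration; the generator's source dataset, canvas size, and placement details may differ.

import torch

def make_is_same_sample(img1, img2, label1, label2, distance, canvas_width=96):
    h, w = img1.shape  # equal-sized grayscale images assumed
    assert 2 * w + distance <= canvas_width, "images must fit on the canvas"
    canvas = torch.zeros(h, canvas_width)
    canvas[:, :w] = img1
    canvas[:, w + distance:2 * w + distance] = img2
    # Label is 1 when both images come from the same class.
    return canvas, int(label1 == label2)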

2.1.2 Running Experiments

python is_same_class_experiments_runner.py \
--train_dataset_path <path_to_train_dataset_file> \
--test_dataset_path <path_to_test_dataset_file> \
--epochs 150 \
--outputs_dir "outputs/is_same_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 1 \
--save_every_num_val 1 \
--epoch_log_interval 1 \
--train_batch_log_interval 50 \
--stop_on_perfect_train_acc \
--stop_on_perfect_train_acc_patience 20 \
--model resnet18 \
--distance 0 \
--grad_change_reg_coeff 0
  • train_dataset_path and test_dataset_path are the paths of the train and test dataset files, respectively.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
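The stop_on_perfect_train_acc flags stop training a fixed number of epochs after train accuracy first reaches 100%. A minimal sketch of that logic (the runner's exact bookkeeping may differ; train_one_epoch is a hypothetical callable returning train accuracy in [0, 1]):

def train_with_perfect_acc_stopping(train_one_epoch, num_epochs=150, patience=20):
    first_perfect_epoch = None
    for epoch in range(num_epochs):
        train_acc = train_one_epoch()
        if train_acc >= 1.0 and first_perfect_epoch is None:
            first_perfect_epoch = epoch  # train accuracy hit 100% this epoch
        if first_perfect_epoch is not None and epoch - first_perfect_epoch >= patience:
            break  # patience exhausted after reaching perfect train accuracy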

2.1.3 Plotting Results

Plotting different regularization options against the task difficulty can be done by:

python locality_bias_plotter.py \
--experiments_dir <path_to_experiments_dir> \
--experiment_groups_dir_names <group_dir_name_1> <group_dir_name_2> ... \
--per_experiment_group_y_axis_value_name <y_axis_value_name_1> <y_axis_value_name_2> ... \
--per_experiment_group_label <group_label_1> <group_label_2> ... \
--x_axis_value_name "distance" \
--plot_title "Is Same Class" \
--x_label "distance between images" \
--y_label "test accuracy (%)" \
--save_plot_to <path_to_save_plot_at> \
--error_bars_opacity 0.5
  • Set experiments_dir to the directory containing the experiments you would like to plot.
  • Pass to experiment_groups_dir_names the names of the experiment groups; each group name should correspond to a sub-directory of that name under experiments_dir.
  • Use per_experiment_group_y_axis_value_name to choose the reported value for each experiment group. The name should match a key in the experiments' summary.json files; use dot notation for nested keys (see the helper sketch below).
  • per_experiment_group_label sets a label for each group, in the order the groups were given.
  • save_plot_to is the path to save the plot at.
  • Use x_axis_value_name to set the name of the value to use as the x-axis. It should match a key in either the summary.json or config.json files; use dot notation for nested keys.
  • Use the -h flag for information on the customizable run arguments.
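The dot-notation convention mentioned above can be resolved with a small helper like the following (illustrative, not necessarily the plotter's implementation; the key in the usage example is an assumption):

import json

def get_nested_value(d, dotted_key):
    # Resolve a key such as "a.b.c" inside a nested dictionary.
    value = d
    for part in dotted_key.split("."):
        value = value[part]
    return value

# Hypothetical usage:
# with open("outputs/is_same_exps/<experiment>/summary.json") as f:
#     summary = json.load(f)
# test_acc = get_nested_value(summary, "best_checkpoint.test_accuracy")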

Example plots:

2.2. Pathfinder

2.2.1 Generating Data

To generate Pathfinder datasets, first run the following command to create raw image samples for all specified path lengths:

python pathfinder_raw_images_generator.py \
--num_samples 20000 \
--path_lengths 3 5 7 9
  • Use the output_dir argument to set the output directory in which the raw samples will be saved (default is ./data/pathfinder/raw).
  • The samples for each path length are saved into separate directories.
  • Use the -h flag for information on the customizable run arguments.

Then, use the following command to create the dataset files for all path lengths (one dataset per length):

python pathfinder_data_generator.py \
--dataset_path data/pathfinder/raw \
--num_train_samples 10000 \
--num_test_samples 10000
  • dataset_path is the path to the directory of the raw images.
  • Use the output_dir argument to set the output directory in which the datasets will be saved (default is ./data/pathfinder).
  • Use the -h flag for information on the customizable run arguments.

2.2.2 Running Experiments

python pathfinder_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 150 \
--outputs_dir "outputs/pathfinder_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 1 \
--save_every_num_val 1 \
--epoch_log_interval 1 \
--train_batch_log_interval 50 \
--stop_on_perfect_train_acc \
--stop_on_perfect_train_acc_patience 20 \
--model resnet18 \
--grad_change_reg_coeff 0
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.

2.2.3 Plotting Results

Plotting different regularization options against the task difficulty can be done by:

python locality_bias_plotter.py \
--experiments_dir <path_to_experiments_dir> \
--experiment_groups_dir_names <group_dir_name_1> <group_dir_name_2> ... \
--per_experiment_group_y_axis_value_name <y_axis_value_name_1> <y_axis_value_name_2> ... \
--per_experiment_group_label <group_label_1> <group_label_2> ... \
--x_axis_value_name "dataset_path" \
--plot_title "Pathfinder" \
--x_label "path length" \
--y_label "test accuracy (%)" \
--x_axis_ticks 3 5 7 9 \
--save_plot_to <path_to_save_plot_at> \
--error_bars_opacity 0.5
  • Set experiments_dir to the directory containing the experiments you would like to plot.
  • Pass to experiment_groups_dir_names the names of the experiment groups; each group name should correspond to a sub-directory of that name under experiments_dir.
  • Use per_experiment_group_y_axis_value_name to choose the reported value for each experiment group. The name should match a key in the experiments' summary.json files; use dot notation for nested keys (see the helper sketch in section 2.1.3).
  • per_experiment_group_label sets a label for each group, in the order the groups were given.
  • save_plot_to is the path to save the plot at.
  • Use x_axis_value_name to set the name of the value to use as the x-axis. It should match a key in either the summary.json or config.json files; use dot notation for nested keys.
  • Use the -h flag for information on the customizable run arguments.

Example plots:

Citation

For citing the paper, you can use:

@article{razin2022implicit,
  title={Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks},
  author={Razin, Noam and Maman, Asaf and Cohen, Nadav},
  journal={arXiv preprint arXiv:2201.11729},
  year={2022}
}