PaRT: Parallel Learning for Robust and Transparent AI

This repository contains the code for PaRT, an algorithm for training a base network on multiple tasks in parallel. The diagram of PaRT is shown in the figure below.

Below, we list the dependencies and give instructions for running the code for each experiment. We have prepared a script for each experiment to make setup and execution straightforward.

Dependencies

  • python >= 3.8
  • pytorch >= 1.7
  • scikit-learn
  • torchvision
  • tensorboard
  • matplotlib
  • pillow
  • psutil
  • scipy
  • numpy
  • tqdm
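
If you prefer not to use the provided setup script, the dependencies can also be installed manually into a fresh environment. The following is only a minimal sketch, assuming a conda environment named part (a hypothetical name) and a pip-based install; the PyTorch command may need to be adapted to your CUDA version:

conda create -n part python=3.8
conda activate part
pip install "torch>=1.7" torchvision scikit-learn tensorboard matplotlib pillow psutil scipy numpy tqdm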

SETUP ENVIRONMENT

To set up the conda environment and create the required directories, go to the scripts directory and run the following commands in the terminal:

conda init bash
bash -i setupEnv.sh

Check that the final output of these commands is:

Installed torch version {---}
Virtual environment was made successfully
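
To double-check the PyTorch installation afterwards, a quick manual sanity check (assuming the environment created by setupEnv.sh is activated) is:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"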

CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Baseline.sh ../../scripts/test_case0_cifar100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_baseline.json to 1, 2, 3, or 4.
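
If you prefer not to edit the file by hand, a one-liner such as the following can switch the seed. This is only a sketch that assumes the key in the JSON file is literally named test_case; adjust the path to the file if needed:

sed -i 's/"test_case": *[0-9]*/"test_case": 1/' test_case0_cifar100_baseline.json

The same approach works for the other configuration files mentioned below.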

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Parallel.sh ../../scripts/test_case0_cifar100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_parallel.json to 1, 2, 3, or 4.

CIFAR 10 AND CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR10 and CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Baseline.sh ../../scripts/test_case0_cifar10_100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Parallel.sh ../../scripts/test_case0_cifar10_100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_parallel.json to 1, 2, 3, or 4.

FIVETASKS EXPERIMENTS

The dataset for this experiment can be downloaded from the link provided on the CPG GitHub page or Here.

Instructions to run the code for the FiveTasks experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksBaseline.sh ../../scripts/test_case0_5tasks_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksParallel.sh ../../scripts/test_case0_5tasks_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_parallel.json to 1, 2, 3, or 4.
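
To run all five seeds back to back, one option is a small loop that rewrites the seed value before each run. This is only a sketch under the same assumption as above (the configuration key is named test_case) and is not one of the provided scripts; adjust CFG to wherever the file lives in your checkout:

CFG=../../scripts/test_case0_5tasks_parallel.json
for seed in 0 1 2 3 4; do
  sed -i "s/\"test_case\": *[0-9]*/\"test_case\": $seed/" "$CFG"
  bash -i run5TasksParallel.sh "$CFG"
done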

Paper

Please cite our paper:

Paknezhad, M., Rengarajan, H., Yuan, C., Suresh, S., Gupta, M., Ramasamy, S., Lee, H. K., PaRT: Parallel Learning Towards Robust and Transparent AI, arXiv:2201.09534 (2022).
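
For convenience, an approximate BibTeX entry assembled from the reference above (the citation key is assumed; author initials follow the citation as given):

@article{paknezhad2022part,
  title   = {PaRT: Parallel Learning Towards Robust and Transparent AI},
  author  = {Paknezhad, M. and Rengarajan, H. and Yuan, C. and Suresh, S. and Gupta, M. and Ramasamy, S. and Lee, H. K.},
  journal = {arXiv preprint arXiv:2201.09534},
  year    = {2022}
}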
