anatome

Ἀνατομή is a PyTorch library to analyze internal representation of neural networks

This project is under active development and the codebase is subject to change.

Installation

anatome requires

Python>=3.9.0
PyTorch>=1.9.0
torchvision>=0.10.0

After the installation of PyTorch, install anatome as follows:

pip install -U git+https://github.com/moskomule/anatome

Available Tools

Representation Similarity

To measure the similarity of learned representations, anatome.SimilarityHook is a useful tool. Currently, the following methods are implemented:

SVCCA (Raghu et al. NIPS 2017)
PWCCA (Morcos et al. NIPS 2018)
Linear CKA (Kornblith et al. ICML 2019)

import torch
from torchvision.models import resnet18
from anatome import SimilarityHook

model = resnet18()
hook1 = SimilarityHook(model, "layer3.0.conv1")
hook2 = SimilarityHook(model, "layer3.0.conv2")
model.eval()
with torch.no_grad():
    model(data[0])  # data[0]: a batch of input images
# downsampling the feature maps to (size, size) may be helpful
hook1.distance(hook2, size=8)
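
distance returns a scalar distance between the two layers' representations on the given batch; since it is a distance, smaller values mean more similar representations. Passing size=None skips the downsampling and compares the full feature maps.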

Loss Landscape Visualization

import torch.nn.functional as F
from matplotlib.pyplot import imshow
from torchvision.models import resnet18
from anatome import landscape2d

x, y, z = landscape2d(resnet18(),
                      data,
                      F.cross_entropy,
                      x_range=(-1, 1),
                      y_range=(-1, 1),
                      step_size=0.1)
imshow(z)
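
Here x and y are the grid coordinates along the two perturbation directions and z is the loss evaluated at each grid point, so imshow(z) renders the loss surface around the current parameters as a heatmap.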

Fourier Analysis

  • Yin et al. NeurIPS 2019, etc.

from anatome import fourier_map

fmap = fourier_map(resnet18(),
                   data,
                   F.cross_entropy,
                   norm=4)
imshow(fmap)
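
fourier_map measures how sensitive the model's loss is to perturbations along each Fourier-basis direction of the input, following Yin et al. NeurIPS 2019, so imshow(fmap) renders a frequency-sensitivity heatmap.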

Citation

If you use this implementation in your research, please cite as:

@software{hataya2020anatome,
    author={Ryuichiro Hataya},
    title={anatome, a PyTorch library to analyze internal representation of neural networks},
    url={https://github.com/moskomule/anatome},
    year={2020}
}
Comments
  • CCA is very small between a random net vs a pretrained one, bug?

    I am getting this issue:

    import torch
    from torchvision.models import resnet18

    import anatome
    print(anatome)
    # from anatome import CCAHook
    from anatome import SimilarityHook
    model = resnet18(pretrained=True)
    random_model = resnet18()
    # random_model = resnet18().cuda()
    # hook1 = CCAHook(model, "layer1.0.conv1")
    # hook2 = CCAHook(random_model, "layer1.0.conv1")
    cxa_dist_type = 'pwcca'
    layer_name = "layer1.0.conv1"
    hook1 = SimilarityHook(model, layer_name, cxa_dist_type)
    hook2 = SimilarityHook(random_model, layer_name, cxa_dist_type)
    with torch.no_grad():
        model(data[0])
        random_model(data[0])
    distance_btw_nets = hook1.distance(hook2, size=8)
    print(f'{distance_btw_nets=}')
    distance_btw_nets = hook1.distance(hook2, size=None)
    print(f'{distance_btw_nets=}')
    <module 'anatome' from '/Users/brando/anaconda3/envs/metalearning/lib/python3.9/site-packages/anatome/__init__.py'>
    distance_btw_nets=0.3089657425880432
    distance_btw_nets=-2.468004822731018e-08
    

    the second call is supposed to use the full features, but the distance is much smaller, whereas I expected it to increase by a lot since we are using more information by not downsampling.

    Is this a bug?

    opened by brando90 13
  • do you preprocess the matrices for us?

    I just noticed these two paragraphs in the papers I read and was wondering whether you center the matrices, or whether the user has to center them beforehand for anatome to work.

    [two screenshots: the paper paragraphs on mean-centering]
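
    (A minimal sketch of the centering in question, assuming raw activation matrices of shape [examples, features]; whether anatome applies this internally is exactly what is being asked:)

    import torch

    def center(x: torch.Tensor) -> torch.Tensor:
        # subtract the per-feature mean, as the CCA/CKA derivations assume
        return x - x.mean(dim=0, keepdim=True)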

    opened by brando90 7
  • Do hooks create unexpected side effects in anatome?

    I will try really hard to make this my last question, and I won't bother you again. Do we need to do some sort of clearing after we call hook.distance in anatome (or deep-copy the models) for the code to work properly?

    e.g. modification based on your tutorial:

    def cxa_dist(mdl1: nn.Module, mdl2: nn.Module, X: Tensor, layer_name: str,
                 downsample_size: Optional[str] = None, iters: int = 1, cxa_dist_type: str = 'pwcca') -> float:
        import copy
        mdl1 = copy.deepcopy(mdl1)
        mdl2 = copy.deepcopy(mdl2)
        # get sim/dis functions
        hook1 = SimilarityHook(mdl1, layer_name, cxa_dist_type)
        hook2 = SimilarityHook(mdl2, layer_name, cxa_dist_type)
        mdl1.eval()
        mdl2.eval()
        for _ in range(iters):  # multiple passes make sense if the NN is stochastic, e.g. BN, dropout layers
            mdl1(X)
            mdl2(X)
        # - size: size of the feature map after downsampling
        dist = hook1.distance(hook2, size=downsample_size)
        # - remove hook, to make sure code stops being stateful (I hope)
        remove_hook(mdl1, hook1)
        remove_hook(mdl2, hook2)
        return float(dist)
    
    def remove_hook(mdl: nn.Module, hook):
        """
        ref: https://github.com/pytorch/pytorch/issues/5037
        """
        handle = mdl.register_forward_hook(hook)
        handle.remove()
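
    (For reference, a minimal sketch of the standard PyTorch pattern: register_forward_hook returns a handle, and handle.remove() detaches exactly that hook. Note that remove_hook above instead registers hook a second time and removes only the new registration, leaving the original attached:)

    import torch.nn as nn

    def run_with_hook(module: nn.Module, hook) -> None:
        handle = module.register_forward_hook(hook)  # attach
        try:
            ...  # forward passes with the hook attached
        finally:
            handle.remove()  # detaches the hook registered above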
    
    opened by brando90 5
  • How should anatome be used if we are comparing nets during training?

    Since the code attaches hooks and seems stateful (it uses objects rather than pure functions), how should one use anatome to compute CCA similarities etc. correctly?

    Is deep-copying the models the solution? Or deleting the hook after every use of the similarity function?:

    def cxa_dist(mdl1: nn.Module, mdl2: nn.Module, X: Tensor, layer_name: str,
                 downsample_size: Optional[str] = None, iters: int = 1, cxa_dist_type: str = 'pwcca') -> float:
        import copy
        mdl1 = copy.deepcopy(mdl1)
        mdl2 = copy.deepcopy(mdl2)
        # print(cca_size)
        # meta_batch [T, N*K, CHW], [T, K, D]
        from anatome import SimilarityHook
        # get sim/dis functions
        hook1 = SimilarityHook(mdl1, layer_name, cxa_dist_type)
        hook2 = SimilarityHook(mdl2, layer_name, cxa_dist_type)
        mdl1.eval()
        mdl2.eval()
        for _ in range(iters):  # multiple passes make sense if the NN is stochastic, e.g. BN, dropout layers
            mdl1(X)
            mdl2(X)
        # - size: size of the feature map after downsampling
        dist = hook1.distance(hook2, size=downsample_size)
        return float(dist)
    
    opened by brando90 4
  • Size Check

    Hello ~ Thank you for the implementation! It is amazing and helps me a lot!

    I have a question: in CCA, you seem to check x.size(0) < x.size(1). I believe the implementation is correct, since I have seen similar checks in other implementations of CCA, but I don't understand the rationale behind it. Could you explain a bit? Thanks! Relatedly (though it could have other causes), my data has a larger feature size (x.size(1)) than number of examples (x.size(0)), and I sometimes get this error: "The algorithm failed to converge because the input matrix is ill-conditioned or has too many repeated singular values." I was wondering whether these are related? Could you provide any insight?
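
    (For context, a hedged sanity check rather than an authoritative answer: when features outnumber examples, the columns of a centered activation matrix generically span the whole sample space, so even independent random data can be perfectly correlated, and the covariance matrices are rank-deficient, which can surface as exactly this convergence error. A minimal illustration:)

    import torch

    # with more features than samples, any target vector lies (generically)
    # in the column space of x, so a perfect linear "correlate" always exists
    n_samples, n_features = 50, 200
    x = torch.randn(n_samples, n_features)
    y = torch.randn(n_samples, 1)
    sol = torch.linalg.lstsq(x, y).solution  # least-squares fit of y from x
    print((x @ sol - y).norm())              # ~0: exact fit, correlation ~ 1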

    opened by Yupei-Du 3
  • How do we run the jupyter notebook example?

    Got error:

    FileNotFoundError: [Errno 2] No such file or directory: '/Users/brando/.torch/data/imagenet/val'
    

    Is it possible to download some dataset to test anatome so that it doesn't throw an error?

    Perhaps a colab example is better?

    a start: https://colab.research.google.com/drive/1GrhWrWFPmlc6kmxc0TJY0Nb6qOBBgjzX?usp=sharing
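
    (One possible workaround, not part of anatome: torchvision's FakeData yields ImageNet-shaped random images, enough to exercise the notebook end to end without downloading anything:)

    import torch
    from torchvision import datasets, transforms

    dataset = datasets.FakeData(size=64, image_size=(3, 224, 224), num_classes=10,
                                transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(dataset, batch_size=32)
    data = next(iter(loader))  # (images, labels), like the notebook's data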

    opened by brando90 3
  • error on import

    Ran:

        import anatome

    and got:

        Traceback (most recent call last):
          File "/home/cody/miniconda3/envs/RepDist/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
            exec(code_obj, self.user_global_ns, self.user_ns)
          File "<ipython-input>", line 1, in <module>
            import anatome
          File "/home/cody/miniconda3/envs/RepDist/lib/python3.7/site-packages/anatome/__init__.py", line 1, in <module>
            from .similarity import SimilarityHook
          File "<fstring>", line 1
            (x.size(0)=)
                      ^
        SyntaxError: invalid syntax

    I have anatome==0.0.1. I can provide my environment info if needed, but the error looks pretty clear: the f'{x.size(0)=}' debug syntax was added in Python 3.8, and this environment runs Python 3.7.

    opened by codestar12 3
  • Correcting CKA similarity to output distance by subtracting 1

    Corrected what seems to be anatome returning a similarity where the function name says distance.

    Including the table from the original CKA paper to confirm my suggestion is indeed correct:

    [screenshot: table from the CKA paper]

    paper link: http://proceedings.mlr.press/v97/kornblith19a.html

    opened by brando90 3
  • Is the way to calculate similarity just by doing 1 minus the values your library gives?

    asking because

    1. it seems the original paper for CKA gives the values as similarities, not distances, so I don't understand why the library here gives them as distances
    2. I want to make sure that computing 1 - cca myself doesn't introduce mistakes / is valid.

    I see that you have a bunch of 1 - value everywhere except for CKA; is that a bug?

    https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L154 https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L133 https://github.com/moskomule/anatome/blob/57f34fa796ffcff0ba37b5fa5142018e9f6fde61/anatome/similarity.py#L203

    opened by brando90 3
  • cca code for GPU code not working

    small example:

    import torch
    import torch.nn as nn
    from anatome import SimilarityHook
    
    from collections import OrderedDict
    
    #
    Din, Dout = 1, 1
    mdl1 = nn.Sequential(OrderedDict([
        ('fc1_l1', nn.Linear(Din, Dout)),
        ('out', nn.SELU()),
        ('fc2_l2', nn.Linear(Din, Dout)),
    ]))
    mdl2 = nn.Sequential(OrderedDict([
        ('fc1_l1', nn.Linear(Din, Dout)),
        ('out', nn.SELU()),
        ('fc2_l2', nn.Linear(Din, Dout)),
    ]))
    
    print(f'is cuda available: {torch.cuda.is_available()}')
    
    with torch.no_grad():
        mu = torch.zeros(Din)
        # std =  1.25e-2
        std = 10
        noise = torch.distributions.normal.Normal(loc=mu, scale=std).sample()
        # mdl2.fc1_l1.weight.fill_(50.0)
        # mdl2.fc1_l1.bias.fill_(50.0)
        mdl2.fc1_l1.weight += noise
        mdl2.fc1_l1.bias += noise
    
    if torch.cuda.is_available():
        mdl1 = mdl1.cuda()
        mdl2 = mdl2.cuda()
    
    hook1 = SimilarityHook(mdl1, "fc1_l1")
    hook2 = SimilarityHook(mdl2, "fc1_l1")
    mdl1.eval()
    mdl2.eval()
    
    # params for doing "good" CCA
    iters = 10
    num_samples_per_task = 500
    size = 8
    # start CCA comparison
    lb, ub = -1, 1
    
    for _ in range(iters):
        x = torch.distributions.Uniform(low=lb, high=ub).sample((num_samples_per_task, Din))
        if torch.cuda.is_available():
            x = x.cuda()
        y1 = mdl1(x)
        y2 = mdl2(x)
        print(f'y1 - y2 = {(y1-y2).norm(2)}')
    print('about to do cca')
    dist = hook1.distance(hook2, size=size)
    print('cca done')
    print(f'cca dist = {dist}')
    print('--> Done!\a')
    

    but it always has a segmentation error:

    (automl-meta-learning) miranda9~/automl-meta-learning $ python test_cca_gpu.py 
    is cuda available: True
    y1 - y2 = 4.561897277832031
    y1 - y2 = 3.7458858489990234
    y1 - y2 = 3.8464999198913574
    y1 - y2 = 4.947702407836914
    y1 - y2 = 5.404015064239502
    y1 - y2 = 4.85843563079834
    y1 - y2 = 4.000360488891602
    y1 - y2 = 4.194643020629883
    y1 - y2 = 4.894904613494873
    y1 - y2 = 4.7721710205078125
    about to do cca
    Segmentation fault
    

    why? how is this fixed?

    opened by brando90 3
  • potential improvement for CNN size=None (or bug?)

    I noticed for size=None you assume an activation is a neuron.

    It is possible to have this instead:

                # - convolution layer [M, C, H, W]
                if size is None:
                    # - no downsampling: [M, C, H, W] -> [M, C, H*W]
                    # each channel is a neuron; each spatial position is a datum
                    # flatten(start_dim=2) flattens dims 2..-1
                    self_tensor = self_tensor.flatten(start_dim=2, end_dim=-1).contiguous()
                    other_tensor = other_tensor.flatten(start_dim=2, end_dim=-1).contiguous()
    
                    # possible improvement: [M, C, H, W] -> [M, C*H*W], i.e. one neuron
                    # per activation (note start_dim=1 on the original 4D tensor):
                    # self_tensor = self_tensor.flatten(start_dim=1, end_dim=-1).contiguous()
                    # other_tensor = other_tensor.flatten(start_dim=1, end_dim=-1).contiguous()
                    return self.cca_function(self_tensor, other_tensor).item()
    

    Original paper: [screenshot of the relevant excerpt]

    I am aware you later loop through each data point and compare them, which is not exactly equivalent to the above, though that is a small nuance. That approach takes C as the effective data size and treats each activation in the spatial dimension as a filter. But a neuron (vector) is usually considered to have size with respect to the data set, so it's usually [M, CHW] or [MHW, C]. So I'm unsure why taking the filter size as the effective data-set size for CCA is justified.

    I will go with [MHW, C], since defining a neuron per filter and treating each patch seen by a filter as a data point makes more sense to me. I think that, due to the nature of CCA, this is fine to apply even across layers. If you want to know why, I'm happy to paste that section of the background section of my paper here.
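
    (A minimal sketch of the [MHW, C] view I mean: every spatial position of every image becomes a data point and each channel/filter a neuron:)

    import torch

    M, C, H, W = 4, 16, 8, 8
    acts = torch.randn(M, C, H, W)
    # [M, C, H, W] -> [M, H, W, C] -> [M*H*W, C]
    acts_mhw_c = acts.permute(0, 2, 3, 1).reshape(M * H * W, C)
    print(acts_mhw_c.shape)  # torch.Size([256, 16])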

    see:

    [screenshot: the background-section excerpt referred to above]

    Thanks for your great library and feedback!

    opened by brando90 2
  • bug in lincka?

    isn't a 1- missing?

    https://github.com/moskomule/anatome/blob/393b36df77631590be7f4d23bff5436fa392dc0e/anatome/distance.py#L212

    https://github.com/moskomule/anatome/pull/7

    [screenshot: the relevant formula from the paper]
    opened by brando90 1
  • why orthonormalize the cca combination instead of the cca vectors/canonical neurons?

    Why does anatome compute pwcca by orthonormalizing the a vector instead of the CCA vector x_tilde = x @ a? e.g.

    https://github.com/moskomule/anatome/blob/393b36df77631590be7f4d23bff5436fa392dc0e/anatome/distance.py#L161

    the authors do the latter, i.e. orthonormalize x_tilde, not a:

    [screenshot: excerpt from the PWCCA paper]

    https://arxiv.org/abs/1806.05759

    current anatome code:

    def pwcca_distance(x: Tensor,
                       y: Tensor,
                       backend: str
                       ) -> Tensor:
        """ Projection Weighted CCA proposed in Morcos et al. 2018.
        Args:
            x: input tensor of Shape DxH, where D>H
            y: input tensor of Shape DxW, where D>W
            backend: svd or qr
        Returns:
        """
        a, b, diag = cca(x, y, backend)
        a, _ = torch.linalg.qr(a)  # reorthonormalize
        alpha = (x @ a).abs_().sum(dim=0)
        alpha /= alpha.sum()
        return 1 - alpha @ diag
    

    related: https://stackoverflow.com/questions/69993768/how-does-one-implement-pwcca-in-pytorch-match-the-original-pwcca-implemented-in
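
    (A hedged sketch of the variant being asked about, i.e. orthonormalizing the projections x_tilde = x @ a as in Morcos et al. 2018 rather than a itself. Names mirror pwcca_distance above; this illustrates the question and is not anatome's implementation:)

    import torch

    def pwcca_weights_via_projections(x: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        x_tilde = x @ a                       # canonical neurons, shape DxH
        q, _ = torch.linalg.qr(x_tilde)       # orthonormalize the projections themselves
        alpha = (q.t() @ x).abs().sum(dim=1)  # weight each canonical direction by |<h_i, x_j>|
        return alpha / alpha.sum()            # normalized projection weights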

    opened by brando90 6
  • Bug in cca_by_svd computation

    I am comparing two totally random, different matrices. Their CCA shouldn't be high, since they are different and random. Google's svcca library gives me much lower CCA values, but anatome gives CCA values of 1.0, which is obviously wrong. Any idea where the bug might be? I've been looking for it for a while:

    Code:

    #%%
    import torch
    from matplotlib import pyplot as plt
    
    from uutils.torch_uu.metrics.cca import cca_core
    
    from torch import Tensor
    from anatome.similarity import svcca_distance, cca_by_svd, cca_by_qr, _compute_cca_traditional_equation
    import numpy as np
    import random
    np.random.seed(0)
    torch.manual_seed(0)
    random.seed(0)
    
    
    # tutorial shapes (500, 10000) (500, 10000) based on MNIST with 500 neurons from a FCNN
    D, N = 500, 10_000
    
    # - creating a random baseline
    # b1 = np.random.randn(*acts1.shape)
    # b2 = np.random.randn(*acts2.shape)
    b1 = np.random.randn(D, N)
    b2 = np.random.randn(D, N)
    print('-- reproducibity finger print')
    print(f'{b1.sum()=}')
    print(f'{b2.sum()=}')
    print(f'{b1.shape=}')
    
    # - get cca values for baseline
    print("\n-- Google's SVCCA -- ")
    baseline = cca_core.get_cca_similarity(b1, b2, epsilon=1e-10, verbose=False)
    # _plot_helper(baseline["cca_coef1"], "CCA coef idx", "CCA coef value")
    # print("Baseline Mean CCA similarity", np.mean(baseline["cca_coef1"]))
    # print("Baseline CCA similarity", baseline["cca_coef1"])
    print(f'{len(baseline["cca_coef1"])=}')
    print("Baseline CCA similarity", baseline["cca_coef1"][:6])
    print(f"{np.mean(baseline['cca_coef1'])=}")
    
    # - get sklearn's CCA: https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html
    # from sklearn.cross_decomposition import CCA
    # # cca = CCA(n_components=D)
    # cca = CCA(n_components=6)
    # cca.fit(b1, b2)
    
    # -
    print("\n-- Ultimate Anatome's SVCCA --")
    # 'svcca': partial(svcca_distance, accept_rate=0.99, backend='svd')
    # svcca_dist: Tensor = svcca_distance(x, y, accept_rate=0.99, backend='svd')
    b1_t, b2_t = torch.from_numpy(b1), torch.from_numpy(b2)
    # svcca: Tensor = 1.0 - svcca_distance(x=b1_t, y=b2_t, accept_rate=1.0, backend='svd')
    # diag: Tensor = svcca_distance(x=b1_t, y=b2_t, accept_rate=0.99, backend='svd')
    # a, b, diag = cca(x, y, backend='svd')
    a, b, diag = cca_by_svd(b1_t, b2_t)
    # a, b, diag = cca_by_qr(b1_t, b2_t)
    # diag = _compute_cca_traditional_equation(b1_t, b2_t)
    print(f'{diag.size()=}')
    print(f'{diag[:6]=}')
    print(f'{diag.mean()=}')
    # print(f'{svcca=}')
    
    print()
    

    output

    -- reproducibity finger print
    b1.sum()=686.0427476883059
    b2.sum()=2341.981561471438
    b1.shape=(500, 10000)
    -- Google's SVCCA -- 
    len(baseline["cca_coef1"])=500
    Baseline CCA similarity [0.43056735 0.42918498 0.42502398 0.42290128 0.42208184 0.41986944]
    np.mean(baseline['cca_coef1'])=0.19071397156414877
    -- Ultimate Anatome's SVCCA --
    diag.size()=torch.Size([500])
    diag[:6]=tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000], dtype=torch.float64)
    diag.mean()=tensor(1., dtype=torch.float64)
    

    related: https://stackoverflow.com/questions/69993768/how-does-one-implement-pwcca-in-pytorch-match-the-original-pwcca-implemented-in
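
    (A note on shapes that may explain this, grounded in the DxH, D>H convention in pwcca_distance's docstring above: anatome expects (examples, neurons) input, while Google's cca_core.get_cca_similarity takes (neurons, datapoints). Passing the (500, 10000) matrices to anatome untransposed puts it in the regime where features vastly outnumber examples, where every canonical correlation is trivially ~1; transposing the inputs to cca_by_svd may be worth trying.)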

    opened by brando90 33