
Overview

ReceptiveFieldAnalysisToolbox


This is RFA-Toolbox, a simple and easy-to-use library that allows you to optimize your neural network architectures using receptive field analysis (RFA) and create graph visualizations of your architecture.

Installation

Install this via pip:

pip install rfa_toolbox

What is Receptive Field Analysis?

Receptive Field Analysis (RFA) is a simple yet effective way to optimize the efficiency of any neural architecture without training it.

Usage

This library allows you to look for certain inefficiencies within your convolutional neural network setup without ever training the model. You can do this simply by importing your architecture into the format of RFA-Toolbox and then using the built-in functions to visualize your architecture with GraphViz. The visualization automatically marks layers predicted to be unproductive in red, and critical layers, which are potentially unproductive, in orange. In edge cases, where the receptive field expands beyond the boundaries of the image on some but not all tensor axes, the layer is marked yellow, since such a layer is probably not operating at maximum efficiency. Being able to detect these types of inefficiencies is especially useful if you plan to train your model on resolutions that are substantially lower than the design resolution of most models. As an alternative, you can also use the graph produced by RFA-Toolbox to hook the analysis more directly into your program, as sketched below.
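As a minimal sketch of such programmatic access, assuming only what the examples in this README show (the graph is handed around as its output EnrichedNetworkNode, and every node exposes name, layer_info and predecessors), you could walk the graph yourself; check the library documentation before relying on any further attributes:

import torchvision
from rfa_toolbox import create_graph_from_pytorch_model

model = torchvision.models.alexnet()
graph = create_graph_from_pytorch_model(model)

def collect_nodes(node, seen=None):
    # Walk the predecessor DAG backwards from the output node.
    # Assumption: the returned graph behaves like the output
    # EnrichedNetworkNode from the custom example below.
    seen = {} if seen is None else seen
    if id(node) not in seen:
        seen[id(node)] = node
        for pred in node.predecessors:
            collect_nodes(pred, seen)
    return list(seen.values())

for node in collect_nodes(graph):
    print(node.name, node.layer_info.name)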

Examples

There are multiple ways to import your model into RFA-Toolbox for analysis, with additional ways being added in future releases.

PyTorch

The simplest way to import a model is to extract the compute graph directly from the PyTorch implementation of your model. Here is a simple example:

import torchvision
from rfa_toolbox import create_graph_from_pytorch_model, visualize_architecture
model = torchvision.models.alexnet()
graph = create_graph_from_pytorch_model(model)
visualize_architecture(
    graph, f"alexnet_32_pixel", input_res=32
).view()

This will create a graph of your model, visualize it using GraphViz, and color all layers that are predicted to be unproductive for an input resolution of 32x32 pixels:

rf_stides.PNG

Keep in mind that the graph is reverse-engineered from the PyTorch JIT compiler, so no looping logic is allowed within the forward pass of the model.
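To illustrate the restriction, a hypothetical module like the following, whose number of convolution applications depends on a runtime argument, cannot be represented faithfully by the extracted graph; unroll such logic or move it outside the model before analysis:

import torch.nn as nn

class LoopyNet(nn.Module):
    # Hypothetical example: the number of conv applications
    # depends on a runtime value, which a JIT-derived graph
    # cannot faithfully represent.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x, n_steps: int = 4):
        for _ in range(n_steps):  # argument-dependent loop
            x = self.conv(x)
        return x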

Custom

If you cannot automatically import your model from PyTorch, or you simply want a visualization, you can also implement the model directly in the proprietary graph format of RFA-Toolbox. This is similar to coding a compute graph in a declarative style, as in TensorFlow 1.x.

from rfa_toolbox import visualize_architecture
from rfa_toolbox.graphs import EnrichedNetworkNode, LayerDefinition


conv1 = EnrichedNetworkNode(
    name="Conv1",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=64
    ),
    predecessors=[]
)
conv2 = EnrichedNetworkNode(
    name="Conv2",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=128
    ),
    predecessors=[conv1]
)

conv3 = EnrichedNetworkNode(
    name="Conv3",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=256
    ),
    predecessors=[conv1]
)

conv4 = EnrichedNetworkNode(
    name="Conv4",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=256
    ),
    predecessors=[conv2, conv3]
)

out = EnrichedNetworkNode(
    name="Softmax",
    layer_info=LayerDefinition(
        name="Fully Connected",
        units=1000
    ),
    predecessors=[conv4]
)
visualize_architecture(
    out, f"example_model", input_res=32
).view()

This will produce the following graph:

simple_conv.png

A quick primer on the Receptive Field

To understand how RFA works, we first need to understand what a receptive field is and how it affects what the network is learning to detect. Every layer in a (convolutional) neural network has a receptive field. It can be considered the "field of view" of this layer. In more precise terms, we define the receptive field as the area influencing the output of a single position of the convolutional kernel. Here is a simple, 1-dimensional example:

rf.PNG

The first layer of this simple architecture can only ever "see" the information in the input pixels directly under its kernel, in this scenario 3 pixels. Another observation we can make from this example is that the receptive field size expands from layer to layer. This happens because the consecutive layers also have kernel sizes greater than 1 pixel, which means that they combine multiple adjacent positions on the feature map into a single position in their output. In other words, every consecutive layer adds additional context to each feature-map position by expanding the receptive field. This ultimately allows networks to go from detecting small, simple patterns to detecting large, very complicated ones.

The effective size of the kernel is not the only factor influencing the growth of the receptive field size. Another important factor is the stride size:

rf_stides.PNG

The stride size is the size of the step between individual kernel positions. Commonly, every possible position is evaluated, which does not affect the receptive field size in any way. When the stride size is greater than one, however, valid positions of the kernel are skipped, which reduces the size of the feature map. Since the information on the feature map is now condensed into fewer positions, the growth of the receptive field is multiplied for all future layers. In real-world architectures, this is typically the case when downsampling layers, like convolutions with a stride size of 2, are used.
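To make the mechanics concrete, here is a small stand-alone calculator (independent of RFA-Toolbox) for the per-axis receptive field of a chain of layers. It implements the standard recurrence in which each layer adds (kernel_size - 1) times the product of all previous strides:

# Stand-alone receptive-field calculator (not part of RFA-Toolbox).
# r: receptive field size; j: cumulative stride ("jump") between
# adjacent positions of the current feature map, both per axis.
def receptive_fields(layers):
    r, j = 1, 1
    sizes = []
    for kernel, stride in layers:
        r = r + (kernel - 1) * j   # the kernel widens the field of view
        j = j * stride             # the stride multiplies all future growth
        sizes.append(r)
    return sizes

# Three 3x3 convolutions, the second one with stride 2:
print(receptive_fields([(3, 1), (3, 2), (3, 1)]))  # [3, 5, 9]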

Why does the Receptive Field Matter?

At this point you may be wondering why the receptive field, of all things, is useful for optimizing an architecture. The short answer: because it determines where in the network patterns of a certain size can be processed. Simply speaking, each convolutional layer can only detect patterns up to a certain size because of its receptive field. Interestingly, this also means that there is an upper limit to the usefulness of expanding the receptive field. At the latest, this is the case when the receptive field of a layer becomes bigger than the input image, since no novel context can be added beyond this point. For convolutional layers this is a problem, because layers past this "border layer" now lack the primary mechanism convolutional layers use to improve the intermediate representation of the data, making these layers unproductive. If you are interested in the details of this phenomenon, we recommend the publications referenced in the repository.
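Building on the calculator sketched above, here is a simplified illustration of this "border layer" test; note that the exact predicates RFA-Toolbox uses for critical and unproductive layers may differ:

def first_border_layer(layers, input_res):
    # A layer is flagged once the receptive field it receives
    # already spans the entire input (simplified criterion).
    r, j = 1, 1
    for idx, (kernel, stride) in enumerate(layers):
        if r >= input_res:
            return idx  # this layer and everything after it adds no new context
        r = r + (kernel - 1) * j
        j = j * stride
    return None

# Alternating stride-1 and stride-2 3x3 convolutions on a 16-pixel input:
print(first_border_layer([(3, 1), (3, 2)] * 4, input_res=16))  # -> 5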

Optimizing Architectures using Receptive Field Analysis

So far, we have learned that expanding the receptive field is the primary mechanism convolutional layers use to improve the intermediate representation of the data. At the point where this is no longer possible, layers can no longer contribute to the quality of the model's output; we refer to these layers as unproductive layers. Layers that push the receptive field size beyond the input resolution are referred to as critical layers. Critical layers are not necessarily unproductive, since they may still incorporate some novel context into the data, depending on how large the receptive field of their input already is.

Of course, being able to predict which layers will become dead weight during training, and why, is highly useful, since we can adjust the design of the architecture to better fit our input resolution without spending time on training models. Depending on the requirements, we may choose to emphasize efficiency by primarily removing unproductive layers, or to focus on predictive performance by making the unproductive layers productive again.

We now illustrate how you might optimize an architecture with a simple example:

Let's take the ResNet architecture, a very popular CNN model. We want to train ResNet18 on ResizedImageNet16, which has a 16-pixel input resolution. When we apply Receptive Field Analysis, we can see that most convolutional layers will in fact not contribute to the inference process (unproductive layers are marked red, probably unproductive layers orange):

resnet18.PNG

We can clearly see that most of the network's layers will not contribute anything useful to the quality of the output, since their receptive field sizes are too large.

From here on, we have multiple ways of optimizing the setup. Of course, we could simply increase the resolution to involve more layers in the inference process, but that is usually very expensive from a computational point of view. In the first scenario, we are not interested in increasing the predictive performance of the model; we simply want to save computational resources. We reduce the kernel size of the first layer from 7x7 to 3x3. This change allows the first three building blocks to contribute more to the quality of the prediction, since no layer is predicted to be unproductive anymore. We then simply replace the remaining building blocks with a simple output head. The new architecture looks like this:

resnet18eff.PNG

Note that all previously unproductive layers are now either removed or only marked as "critical", which is generally not a big problem, since the skip connections effectively "reset" the minimum receptive field size after each building block. Also note that fully connected layers are always marked as critical or unproductive, since they technically have an infinite receptive field size.

The resulting architecture achieves slightly better predictive performance than the original architecture, at substantially lower computational cost: in this case we save approx. 80% of the computational cost while improving the predictive performance slightly, from 17% to 18%.
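Expressed in PyTorch, the two modifications could look roughly like the following sketch based on torchvision's ResNet18; which stages to cut and the stride of the first layer are assumptions of this sketch, not specifics from the experiments:

import torch.nn as nn
import torchvision

model = torchvision.models.resnet18()
# 3x3 instead of 7x7 for the first convolution; stride and padding
# are assumptions, the text above only specifies the kernel size.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False)
# Replace the later stages with a simple output head
# (which stages to cut is an assumption of this sketch).
model.layer3 = nn.Identity()
model.layer4 = nn.Identity()
model.fc = nn.Linear(128, 1000)  # layer2 of ResNet18 ends with 128 channels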

In another scenario, we may not be satisfied with the predictive performance. In other words, we want to make use of the underutilized parameters of the network by turning all unproductive layers into productive ones. We achieve this by changing their receptive field sizes. The biggest lever for changing the receptive field size is always the number of downsampling layers, since downsampling layers have a multiplicative effect on the growth of the receptive field for all subsequent layers. We can exploit this by simply removing the MaxPooling layer, which is the second layer of the original architecture. We also reduce the kernel size of the first layer from 7x7 to 3x3 and its stride size to 1. This drastically reduces the receptive field sizes across the entire architecture, making most layers productive again. We address the remaining unproductive layers by removing the final downsampling layer and distributing the building blocks as evenly as possible among the three stages between the remaining downsampling layers.
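Here is a rough PyTorch sketch of the first two changes (removing the final downsampling layer and redistributing the building blocks are omitted, and "resnet18_perf" is just a placeholder name):

import torch.nn as nn
import torchvision
from rfa_toolbox import create_graph_from_pytorch_model, visualize_architecture

model = torchvision.models.resnet18()
# 3x3 kernel with stride 1 instead of 7x7 with stride 2.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# Remove the early MaxPooling layer; both changes slow down the
# growth of the receptive field for every subsequent layer.
model.maxpool = nn.Identity()

# Re-check the modified model at the target resolution.
graph = create_graph_from_pytorch_model(model)
visualize_architecture(graph, "resnet18_perf", input_res=16).view()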

The resulting architecture now looks like this:

resnet18perf.PNG

The architecture now no longer has unproductive layers in its building blocks, and only 2 critical layers remain. This improved architecture achieves 34% top-1 accuracy on ResizedImageNet16, instead of the 17% of the original architecture. However, this improvement comes at a price: the removed downsampling layers increase the computations required to process an image by roughly a factor of 8.

Either way, RFA-Toolbox allows you to optimize your convolutional neural network architectures for efficiency, performance, or a sweet spot between the two, without the need for long-running trial-and-error sessions.

Credits

This package was created with Cookiecutter and the browniebroke/cookiecutter-pypackage project template.
