Training neural models with structured signals.

Overview

Neural Structured Learning in TensorFlow

Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit as represented by a graph [1,2,5] or implicit as induced by adversarial perturbation [3,4].

Structured signals are commonly used to represent relations or similarity among samples that may be labeled or unlabeled. Leveraging these signals during neural network training harnesses both labeled and unlabeled data, which can improve model accuracy, particularly when the amount of labeled data is relatively small. Additionally, models trained with samples that are generated by adversarial perturbation have been shown to be robust against malicious attacks, which are designed to mislead a model's prediction or classification.

NSL generalizes to Neural Graph Learning [1] as well as to Adversarial Learning [3]. The NSL framework in TensorFlow provides the following easy-to-use APIs and tools for developers to train models with structured signals:

  • Keras APIs to enable training with graphs (explicit structure) and adversarial perturbations (implicit structure); a minimal sketch follows this list.

  • TF ops and functions to enable training with structure when using lower-level TensorFlow APIs.

  • Tools to build graphs and construct graph inputs for training.
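
For a flavor of the Keras API, here is a minimal sketch that wraps a base model with adversarial regularization. The tiny MLP and the 'feature'/'label' dictionary keys are illustrative placeholders; nsl.configs.make_adv_reg_config and nsl.keras.AdversarialRegularization are the entry points used in the NSL tutorials.

    import neural_structured_learning as nsl
    import tensorflow as tf

    # Any Keras model can serve as the base model; this small MLP is a placeholder.
    base_model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28), name='feature'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

    # Wrap the base model so that adversarial perturbations (implicit structure)
    # regularize training. Note that adv_config is passed by keyword.
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)

    adv_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
    # The wrapped model consumes feature/label dictionaries:
    # adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)

Graph regularization (explicit structure) follows the same wrapping pattern via nsl.keras.GraphRegularization, and the unchanged base model can be used directly at serving time.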

The NSL framework is designed to be flexible and can be used to train any kind of neural network. For example, feed-forward, convolutional, and recurrent neural networks can all be trained using the NSL framework. In addition to supervised and semi-supervised learning (where the amount of supervision is low), NSL can in theory be generalized to unsupervised learning. Incorporating structured signals is done only during training, so the performance of the serving/inference workflow remains unchanged. Please check out our tutorials for a practical introduction to NSL.

Getting started

You can install the prebuilt NSL pip package by running:

pip install neural-structured-learning

For more detailed instructions on how to install NSL as a package or to build it from source in various environments, please see the installation guide.

Note that NSL requires a TensorFlow version of 1.15 or higher. NSL also supports TensorFlow 2.x with the exception of v2.1, which contains a bug that is incompatible with NSL.

Videos and Colab Tutorials

Get a jump-start on NSL by watching our video series on YouTube! It gives a complete overview of the framework and discusses several aspects of learning with structured signals.

The series covers the Overall Framework, Natural Graphs, Synthetic Graphs, and Adversarial Learning.

We've also created hands-on, Colab-based tutorials that will allow you to interactively explore NSL, covering graph regularization with natural and synthesized graphs as well as adversarial learning.

You can find more examples and tutorials under the examples directory.

Contributing to NSL

Contributions are welcome and highly appreciated. There are several ways to contribute to TF Neural Structured Learning:

  • Case studies: If you are interested in applying NSL, consider wrapping up your usage as a tutorial, a new dataset, or an example model that others could use for experiments and/or development. The examples directory could be a good destination for such contributions.

  • Product excellence: If you are interested in improving NSL's product excellence and developer experience, the best way is to clone this repo, make changes directly to the implementation in your local repo, and then send us a pull request to integrate your changes.

  • New algorithms: If you are interested in developing new algorithms for NSL, the best way is to study the implementations of the NSL libraries and to think of extensions to the existing implementations (or alternative approaches). If you have a proposal for a new algorithm, we recommend starting by staging your project in the research directory and including a Colab notebook to showcase the new features. If you develop new algorithms in your own repository, we would be happy to feature pointers to academic publications and/or repositories using NSL from this repository.

Please be sure to review the contribution guidelines.

Research

See our research directory for research projects in Neural Structured Learning.

Featured Usage

Please see the usage page to learn more about how NSL is being discussed and used in the open source community.

Issues, Questions, and Feedback

Please use GitHub issues to file issues, bugs, and feature requests. For questions, please post them on Stack Overflow with the "nsl" tag. For feedback, please fill out this form; we would love to hear from you.

Release Notes

Please see the release notes for detailed version updates.

References

[1] T. Bui, S. Ravi and V. Ramavajjala. "Neural Graph Learning: Training Neural Networks Using Graphs." WSDM 2018.

[2] T. Kipf and M. Welling. "Semi-Supervised Classification with Graph Convolutional Networks." ICLR 2017.

[3] I. Goodfellow, J. Shlens and C. Szegedy. "Explaining and Harnessing Adversarial Examples." ICLR 2015.

[4] T. Miyato, S. Maeda, M. Koyama and S. Ishii. "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning." ICLR 2016.

[5] D. Juan, C. Lu, Z. Li, F. Peng, A. Timofeev, Y. Chen, Y. Gao, T. Duerig, A. Tomkins and S. Ravi. "Graph-RISE: Graph-Regularized Image Semantic Embedding." WSDM 2020.

Comments
  • Extending Graph regularization to images?

    Hi folks.

    I am willing to work on a tutorial that shows how to extend the graph regularization example to images, in the same way it's done for text-based problems. Is there scope for this tutorial inside this repo?

    opened by sayakpaul 29
  • Added CNN adversarial learning tutorial notebook

    Hi @arjung as discussed. Please find the adversarial learning notebook example.

    I have added it in neural_structured_learning/examples/notebooks as you had mentioned. Feel free to suggest any edits needed to keep it in line with Google & TF standards, and I will make changes based on the feedback.

    cla: no 
    opened by dipanjanS 16
  • gnn model implementation

    Hi,

    I'm interested in neural structured learning (graph neural networks) and want to implement some basic GNN models such as GCN and GAT using TensorFlow 2.0. I have some ideas on how to implement them, but I am not sure whether my proposed architecture is clear. Here is the code structure I have roughly in mind:

    • Create a folder named 'models' to hold the various GNN model structures
    • In the 'models' folder, create a file named sublayers.py that defines single graph layers as classes: GraphConvLayer, GraphAttenLayer, etc.
    • Design base node & edge models that other GNN models can inherit from (for flexibility and readability)
    • Build functions in utils.py (or a folder named utils) such as evaluate, inference, calculate_loss, etc., to make things more convenient for users

    Regarding the above structure, I have loosely referred to the repository of the Deep Graph Library (dgl). As mentioned before, I want to build some GNN model applications but don't know if my idea is appropriate. My initial thought is that maybe we can create a folder called gnn-survey-paper, or somewhere else, to put these GNN implementations. This is the first time I have tried to post an issue, and I hope to contribute to the open-source code. If anything above is unclear, or if your team has recommendations, feel free to let me know. Thanks :)
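
    A purely hypothetical sketch of what a single layer in the proposed sublayers.py could look like (the class name follows the proposal above; the implementation is illustrative, not an existing NSL API):

    import tensorflow as tf

    class GraphConvLayer(tf.keras.layers.Layer):
      """A single GCN-style graph convolution layer (illustrative only)."""

      def __init__(self, units, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)

      def build(self, input_shape):
        # input_shape is a list: [features_shape, adjacency_shape].
        feature_dim = input_shape[0][-1]
        self.kernel = self.add_weight('kernel', shape=(feature_dim, self.units))

      def call(self, inputs):
        features, adjacency = inputs  # adjacency assumed already normalized
        return self.activation(adjacency @ features @ self.kernel)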

    Best regards, Josh

    question stat:awaiting response 
    opened by joshchang1112 16
  • TypeError: object of type 'AdvRegConfig' has no len()

    Hi, I'm implementing a Keras binary image classifier using VGG16 with Adversarial Regularization. After initializing the VGG16 model layers, I'm configuring the adversarial regularizer using the following code:

    import neural_structured_learning as nsl
    
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(custom_vgg_model, adv_config)
    adv_model.compile(tf.keras.optimizers.SGD(learning_rate=2e-5), loss='categorical_crossentropy', metrics=['accuracy'])
    

    When I execute the code, I get the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-32-bb6bdecb015d> in <module>()
          1 adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
          2 adv_model = nsl.keras.AdversarialRegularization(custom_vgg_model, adv_config)
    ----> 3 adv_model.compile(tf.keras.optimizers.SGD(learning_rate=2e-5), loss='categorical_crossentropy', metrics=['accuracy'])
    
    2 frames
    /usr/local/lib/python3.7/dist-packages/neural_structured_learning/keras/adversarial_regularization.py in _build_labeled_losses(self, output_names)
        554       return  # Losses are already populated.
        555 
    --> 556     if len(output_names) != len(self.label_keys):
        557       raise ValueError('The model has different number of outputs and labels. '
        558                        '({} vs. {})'.format(
    
    TypeError: object of type 'AdvRegConfig' has no len()
    

    How do I resolve this issue?
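
    Judging from the traceback, the AdvRegConfig object is being bound to the wrapper's label_keys positional parameter. A likely fix, assuming the nsl.keras.AdversarialRegularization signature in which adv_config is a keyword argument rather than the second positional one, is:

    adv_model = nsl.keras.AdversarialRegularization(custom_vgg_model,
                                                    adv_config=adv_config)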

    question 
    opened by kabyanil 12
  • Adding a tutorial for graph regularization with images

    @arjung I reckoned that adding the example under g3docs/tutorials might be more appropriate because it would be the first one that shows how to build a synthetic graph for an image dataset and build a model.

    Let me know your thoughts. A parallel Colab Notebook is available here for the results. For reference, the corresponding issue thread is available here.

    cla: yes 
    opened by sayakpaul 12
  • ValueError in the tutorial on Colab environment

    Hello

    The error "Insufficient elements in branch_graphs[0].outputs." (a ValueError) occurs in both Neural Graph Learning tutorials (sentiment and document classification).

    Both happen during training, in the graph_reg_model.fit() call. The error messages are shown below; I think both stem from the same issue.

    • Graph regularization for document classification using natural graphs
    Epoch 1/100
          1/Unknown - 0s 303ms/step
    
    ---------------------------------------------------------------------------
    
    ValueError                                Traceback (most recent call last)
    
    <ipython-input-20-0c7d19de6181> in <module>()
         10     loss='sparse_categorical_crossentropy',
         11     metrics=['accuracy'])
    ---> 12 graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
    
    25 frames
    
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/cond_v2.py in _make_indexed_slices_indices_types_match(op_type, branch_graphs)
        650                      "Expected: %i\n"
        651                      "Actual: %i" %
    --> 652                      (current_index, len(branch_graphs[0].outputs)))
        653 
        654   # Cast indices with mismatching types to int64.
    
    ValueError: Insufficient elements in branch_graphs[0].outputs.
    Expected: 11
    Actual: 10
    
    • Graph regularization for sentiment classification using synthesized graphs
    Epoch 1/10
          1/Unknown - 2s 2s/step
    
    ---------------------------------------------------------------------------
    
    ValueError                                Traceback (most recent call last)
    
    <ipython-input-30-e49eed0ffe51> in <module>()
          3     validation_data=validation_dataset,
          4     epochs=HPARAMS.train_epochs,
    ----> 5     verbose=1)
    
    25 frames
    
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/cond_v2.py in _make_indexed_slices_indices_types_match(op_type, branch_graphs)
        650                      "Expected: %i\n"
        651                      "Actual: %i" %
    --> 652                      (current_index, len(branch_graphs[0].outputs)))
        653 
        654   # Cast indices with mismatching types to int64.
    
    ValueError: Insufficient elements in branch_graphs[0].outputs.
    Expected: 18
    Actual: 17
    

    Thanks

    bug 
    opened by yenhao 11
  • Generators

    Is there a particular input format to use when combining NSL with ImageDataGenerator? I get this error: OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed in Graph execution.
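
    One hedged sketch of a workaround (the directory path and input names are hypothetical; output_signature requires a recent TF 2.x) is to adapt the generator into a tf.data.Dataset yielding the feature/label dictionaries that nsl.keras.AdversarialRegularization expects:

    import tensorflow as tf

    gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)
    flow = gen.flow_from_directory('images/', target_size=(32, 32), batch_size=32)

    dataset = tf.data.Dataset.from_generator(
        lambda: flow,
        output_signature=(
            tf.TensorSpec(shape=(None, 32, 32, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(None, flow.num_classes), dtype=tf.float32),
        )
    ).map(lambda x, y: {'feature': x, 'label': y})

    # adv_model.fit(dataset, steps_per_epoch=len(flow), epochs=5)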

    question 
    opened by esparza83 10
  • Adding an implementation of Denoised Smoothing

    This PR adds Denoised Smoothing under the research folder.

    The success of Randomized Smoothing is proven and it works in many different scenarios. But it also operates under the assumption that the underlying classifier performs well under Gaussian perturbations. Wouldn't it be better if we could just take our standard pre-trained image classifiers (including the Cloud APIs) and get the benefits of Randomized Smoothing in an easy manner?

    That is precisely what Denoised Smoothing does: by prepending a denoiser to an image classifier, it maintains the theoretical guarantees of robustness against L2 attacks.

    Besides, the implementation includes a suite of utilities that may be helpful for generating robustness certificates. I think that gives us a unique opportunity to design an API inside NSL that would allow easy generation of robustness certificates. To the best of my knowledge, no such framework exists as of today.

    cla: yes 
    opened by sayakpaul 9
  • link prediction?

    Hi,

    I have been thinking about using NSL in the context of link prediction. I believe it can definitely be reframed as a classification problem. The only thing I am wondering about is whether anyone has thought of an elegant way to add the neighbours (in this case, I guess, of both nodes which form the link). Has anyone been working on this?

    I guess a decent way of going about it would be to change the parse_example function:

    def parse_example(example_proto):
      """Extracts relevant fields from the `example_proto`.
    
      Args:
        example_proto: An instance of `tf.train.Example`.
    
      Returns:
        A pair whose first value is a dictionary containing relevant features
        and whose second value contains the ground truth labels.
      """
    

    so that it can take a pair of examples which form a link.
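
    For illustration, a hypothetical pair-based parser might look like the following (feature names, shapes, and types are made up for the example):

    import tensorflow as tf

    FEATURE_SPEC = {
        'source_words': tf.io.FixedLenFeature([100], tf.int64),  # features of node A
        'target_words': tf.io.FixedLenFeature([100], tf.int64),  # features of node B
        'label': tf.io.FixedLenFeature([], tf.int64),            # 1 if a link exists
    }

    def parse_link_example(example_proto):
      """Parses a tf.train.Example holding both endpoints of a candidate link."""
      features = tf.io.parse_single_example(example_proto, FEATURE_SPEC)
      label = features.pop('label')
      return features, label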

    Thanks, George

    question stat:awaiting response 
    opened by ggerogiokas 9
  • Added CNN adversarial learning tutorial notebook [Updated with Feedback]

    Hi @arjung as discussed. Please find the adversarial learning notebook example updated based on feedback from your side.

    You can review it and add any comments needed to fix pending issues. It would also be great if you could close PR: https://github.com/tensorflow/neural-structured-learning/pull/67

    cla: yes 
    opened by dipanjanS 8
  • Adversarial Learning Tutorial Additions

    Zoey and I are partners working on a project to help improve NSL documentation (specifically this tutorial).

    I noticed in online communities, and from personal experience, that while the corresponding YouTube videos explain the concepts well for beginners, the recommended interactive Colabs were a bit difficult for beginners to follow. That's why I wanted to better connect the video content with the Colab content, so I created a recap section for beginners that includes all the concepts explained in the YouTube video. Also, some people may prefer reading over watching, and some may just want a written reference integrated into the Colab tutorial.

    cla: yes 
    opened by angela-wang1 8
  • Clarification on attacks used in Adversarial Training

    Hello, I was trying to use NSL to implement adversarial training on my custom model, so I followed the default steps in the tutorial video, which worked like a charm. While studying the code, I noticed that the call to make_adv_reg_config() has a parameter called pgd_epsilon, which is "...Only used in Projected Gradient Descent (PGD) attack".

    This statement suggests that NSL can use different attacks in adversarial training; however, it is not clear how to select which attack to use, or which attack is currently in use. Up till now I had assumed that PGD was being used by default, as this is common in the literature, but I would like to know if this is actually the case, and by extension whether it is possible to use a different attack and how that can be done.
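
    For reference, a hedged sketch of how PGD appears to be selected (pgd_epsilon is the parameter quoted above; pgd_iterations is assumed to be the companion parameter that switches from the single-step FGSM-style default to multi-step PGD):

    import neural_structured_learning as nsl

    adv_config = nsl.configs.make_adv_reg_config(
        multiplier=0.2,
        adv_step_size=0.01,
        pgd_iterations=10,  # more than one step: PGD instead of single-step FGSM
        pgd_epsilon=0.05)   # projection radius, used only by PGD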

    Thanks!

    opened by madarax64 4
  • Apply adversarial training after a few epochs.

    I want to train the model without the adversarial attack for the first two epochs. After that, I would like to train the model with adversarial learning.

    In summary,

    Epoch 1: training w/o adversarial
    Epoch 2: training w/o adversarial
    Epoch 3: training with adversarial
    Epoch 4: training with adversarial
    ...

    Is it possible to adjust the starting epoch of adversarial training? I couldn't find any related parameter in nsl.configs.make_adv_reg_config.
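
    One hedged workaround (a sketch, not an official NSL option) is to train the base model alone for the first two epochs and only then wrap it, since nsl.keras.AdversarialRegularization shares the base model's weights. The 'feature' input name is a placeholder:

    base_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    base_model.fit(x_train, y_train, epochs=2)  # epochs 1-2: no adversarial loss

    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)
    adv_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    adv_model.fit({'feature': x_train, 'label': y_train}, epochs=2)  # epochs 3-4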

    opened by jhss 1
  • Adding support for reading and writing to multiple tfrecords in `nsl.tools.pack_nbrs`

    The current implementation of nsl.tools.pack_nbrs does not support reading from and writing to multiple TFRecord files. Given the extensive optimizations made available by the tf.data API when working with multiple TFRecord files, supporting this would yield significant performance gains in distributed training. I would be willing to contribute to this.

    Relevant parts of the code (a sketch of the multi-shard reading pattern follows these links):

    • for reading https://github.com/tensorflow/neural-structured-learning/blob/c21dad4feff187cdec041a564193ea7b619b8906/neural_structured_learning/tools/pack_nbrs.py#L63-L71
    • for writing https://github.com/tensorflow/neural-structured-learning/blob/c21dad4feff187cdec041a564193ea7b619b8906/neural_structured_learning/tools/pack_nbrs.py#L264-L270
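
    For context, this is the multi-file pattern in question: with a sharded input, tf.data can interleave several TFRecord files in parallel (the shard pattern below is hypothetical):

    import tensorflow as tf

    files = tf.data.Dataset.list_files('nsl_train-*.tfrecord')
    dataset = files.interleave(
        tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
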
    enhancement 
    opened by srihari-humbarwadi 8
Releases (v1.4.0)
  • v1.4.0 (Jul 29, 2022)

    Major Features and Improvements

    • Add params as an optional third argument to the embedding_fn argument of nsl.estimator.add_graph_regularization. This is similar to the params argument of an Estimator's model_fn, which allows users to pass arbitrary states through. Adding this as an argument to embedding_fn will allow users to access that state in the implementation of embedding_fn.
    • Both nsl.keras.AdversarialRegularization and nsl.keras.GraphRegularization now support the save method which will save the base model.
    • nsl.keras.AdversarialRegularization now supports a tf.keras.Sequential base model with a tf.keras.layers.DenseFeatures layer.
    • nsl.configs.AdvNeighborConfig has a new field random_init. If set to True, a random perturbation will be performed before FGSM/PGD steps (see the sketch after this list).
    • nsl.lib.gen_adv_neighbor now has a new parameter use_while_loop. If set to True, the PGD steps are done in a tf.while_loop which is potentially more memory efficient but has some restrictions.
    • New library functions:
      • nsl.lib.random_in_norm_ball for generating random tensors in a norm ball.
      • nsl.lib.project_to_ball for projecting tensors onto a norm ball.
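
    A minimal sketch of opting into the new random initialization (other fields keep their defaults; the values here are illustrative):

        import neural_structured_learning as nsl

        # random_init=True adds a random perturbation before the FGSM/PGD steps.
        adv_neighbor_config = nsl.configs.AdvNeighborConfig(
            adv_step_size=0.05, random_init=True)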

    Bug Fixes and Other Changes

    • Dropped Python 2 support (which was deprecated 2+ years ago).
    • nsl.keras.AdversarialRegularization and nsl.lib.gen_adv_neighbor will not attempt to calculate gradients for tensors with a non-differentiable dtype. This doesn’t change the functionality, but only suppresses excess warnings.
    • Both estimator/adversarial_regularization.py and estimator/graph_regularization.py explicitly import estimator from tensorflow as a separate import instead of accessing it via tf.estimator and depend on the tensorflow estimator target.
    • The new top-level workshops directory contains presentation materials from tutorials we organized on NSL at KDD 2020, WSDM 2021, and WebConf 2021.
    • The new usage.md page describes featured usage of NSL, external talks, blog posts, media coverage, and more.
    • End-to-end examples under the examples directory:
      • New examples about graph neural network modules with graph-regularizer and graph convolution.
      • New README file providing an overview of the examples.
    • New tutorial examples under the examples/notebooks directory:
      • Graph regularization for image classification using synthesized graphs
      • Adversarial Learning: Building Robust Image Classifiers
      • Saving and loading NSL models

    Thanks to our Contributors

    This release contains contributions from many people at Google Research and from TF community members: @angela-wang1, @dipanjanS, @joshchang1112, @SamuelMarks, @sayakpaul, @wangbingnan136, @zoeyz101.

  • v1.3.1 (Aug 18, 2020)

    Major Features and Improvements

    None.

    Bug Fixes and Other Changes

    • Fixed the NSL graph builder to ignore lsh_rounds when lsh_splits < 1. Previously, the graph builder would repeat the work twice by default. In addition, the default value of lsh_rounds has been changed from 2 to 1.
    • Updated the NSL IMDB tutorial to use the new LSH support when building the graph, thereby speeding up the graph building time by ~5x.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

  • v1.3.0 (Jul 31, 2020)

    Major Features and Improvements

    • Added locality-sensitive hashing (LSH) support to the graph builder tool. This allows the graph builder to scale up to larger input datasets. As part of this change, the new nsl.configs.GraphBuilderConfig class was introduced, as well as a new nsl.tools.build_graph_from_config function. The new parameters for controlling the LSH algorithm are named lsh_rounds and lsh_splits.
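
    A hedged sketch of the new configuration path (the positional arguments of build_graph_from_config and the non-LSH field names are assumptions; lsh_splits and lsh_rounds are the knobs named above):

        import neural_structured_learning as nsl

        # Hypothetical file paths; LSH bucketing makes large input datasets tractable.
        graph_config = nsl.configs.GraphBuilderConfig(
            similarity_threshold=0.8, lsh_splits=32, lsh_rounds=2)
        nsl.tools.build_graph_from_config(
            ['embeddings.tfr'], 'graph.tsv', graph_config)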

    Bug Fixes and Other Changes

    • Changed nsl.tools.add_edge to return a boolean result indicating whether a new edge was added; previously, this function did not return any value.
    • Fixed a bug in nsl.tools.read_tsv_graph that was incrementing the reported edge count too often.
    • Removed Python 2 unit tests.
    • Fixed a bug in nsl.estimator.add_adversarial_regularization and nsl.estimator.add_graph_regularization so that the UPDATE_OPS can be triggered correctly.
    • Updated graph-NSL tutorials not to parse neighbor features during evaluation.
    • Added scaled graph and adversarial loss values as scalars to the summary in nsl.estimator.add_graph_regularization and nsl.estimator.add_adversarial_regularization respectively.
    • Updated graph and adversarial regularization loss metrics in nsl.keras.GraphRegularization and nsl.keras.AdversarialRegularization respectively, to include scaled values for consistency with their respective loss term contributions.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

  • v1.2.0 (Jun 10, 2020)

    Release 1.2.0

    Major Features and Improvements

    • Changes nsl.tools.build_graph(...) to be more efficient and use far less memory. In particular, memory consumption is now proportional only to the size of the input, not the size of the input plus the size of the output. Since the size of the output can be quadratic in the size of the input, this can lead to large memory savings. nsl.tools.build_graph(...) now also produces a log message for every 1M edges it writes, to indicate progress.
    • Introduces nsl.lib.strip_neighbor_features, a function to remove graph neighbor features from a feature dictionary.
    • Restricts the expectation of graph neighbor features being present in the input to the training mode, for both the Keras and Estimator graph regularization wrappers. So, during evaluation, prediction, etc., neighbor features no longer need to be fed to the model.
    • Changes the default value of keep_rank from False to True, and flips its semantics, in nsl.keras.layers.NeighborFeatures.call and nsl.utils.unpack_neighbor_features.
    • Supports feature value constraints for adversarial neighbors; see clip_value_min and clip_value_max in nsl.configs.AdvNeighborConfig.
    • Supports adversarial regularization with PGD in Keras and Estimator models.
    • Supports generating adversarial neighbors using Projected Gradient Descent (PGD) via the nsl.lib.adversarial_neighbor.gen_adv_neighbor API.

    Bug Fixes and Other Changes

    • Clarifies the meaning of the nsl.AdvNeighborConfig.feature_mask field.
    • Updates notebooks to avoid invoking the nsl.tools.build_graph and nsl.tools.pack_nbrs utilities as binaries.
    • Replaces a deprecated API in notebooks when testing for GPU availability.
    • Fixes typos in documentation and notebooks.
    • Improves the example trainers.
    • Fixes the metric string to 'acc' for compatibility with both TF 1.x and TF 2.x.
    • Allows passing dictionaries to sequential base models in adversarial regularization.
    • Supports input feature lists in nsl.lib.gen_adv_neighbor.
    • Supports input with a collection of tensors in nsl.lib.maximize_within_unit_norm.
    • Adds an optional parameter base_with_labels_in_features to nsl.keras.AdversarialRegularization for passing label features to the base model.
    • Fixes a tensor ordering issue in nsl.keras.AdversarialRegularization when used with a functional Keras base model.

    Thanks to our Contributors

    This release contains contributions from many people at Google as well as @mzahran001.

  • v1.1.0 (Oct 15, 2019)

    Release 1.1.0

    Major Features and Improvements

    • Introduces nsl.tools.build_graph, a function for graph building.

    • Introduces nsl.tools.pack_nbrs, a function to prepare input for graph-based NSL (see the sketch after this list).

    • Adds tf.estimator.Estimator support for NSL. In particular, this release introduces two new wrapper functions named nsl.estimator.add_graph_regularization and nsl.estimator.add_adversarial_regularization to wrap existing tf.estimator.Estimator-based models with NSL. These APIs are currently supported only for TF 1.x.
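
    A minimal sketch of these two tools in sequence, following the pattern used in the NSL tutorials (file paths are placeholders):

        import neural_structured_learning as nsl

        nsl.tools.pack_nbrs(
            'train.tfr',      # labeled examples (TFRecords of tf.train.Example)
            '',               # optional unlabeled examples (none here)
            'graph.tsv',      # graph edges, e.g. produced by nsl.tools.build_graph
            'nsl_train.tfr',  # output: examples augmented with neighbor features
            add_undirected_edges=True,
            max_nbrs=3)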

    Bug Fixes and Other Changes

    • Adds version information to the NSL package, which can be queried as nsl.__version__.

    • Fixes loss computation with Loss objects in AdversarialRegularization.

    • Adds a new parameter to nsl.keras.adversarial_loss which can be used to pass additional arguments to the model.

    • Fixes typos in documentation and notebooks.

    • Updates notebooks to use the release version of TF 2.0.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

  • v1.0.1 (Sep 18, 2019)

    Release 1.0.1

    Major Features and Improvements

    • Adds make_graph_reg_config, a new API to help construct an nsl.configs.GraphRegConfig object.

    • Updates the package description on PyPI.

    Bug Fixes and Other Changes

    • Fixes metric computation with Metric objects in AdversarialRegularization.

    • Fixes typos in documentation and notebooks.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    @joaogui1, @aspratyush.

  • v1.0.0 (Sep 3, 2019)
