Trustworthy AI related projects

Overview

Trustworthy AI

This repository collects trustworthy-AI-related projects from Huawei Noah's Ark Lab.
Current projects include:

Causal Structure Learning

  • Causal_Discovery_RL: code, datasets, and training logs for the experimental results in the paper 'Causal discovery with reinforcement learning', ICLR 2020 (oral).
  • GAE_Causal_Structure_Learning: an implementation of 'A graph autoencoder approach to causal structure learning', NeurIPS Causal Machine Learning Workshop, 2019.
  • Datasets:
    • Synthetic datasets: code for generating the synthetic datasets used in the paper (a minimal generation sketch appears after this list).
    • Real datasets: a very challenging real dataset in which the objective is to find the causal structure underlying time-series data. The true graph is obtained from expert knowledge. We welcome everyone to try this dataset and report their results!
  • We will also release code for other gradient-based causal structure learning methods.
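
For reference, here is a minimal sketch of how a synthetic benchmark of this kind is typically generated: a random DAG combined with a linear-Gaussian structural equation model. This is an illustrative example only, not the repository's generation code, and all names in it are hypothetical.

    import numpy as np

    def simulate_linear_sem(num_nodes=10, edge_prob=0.3, n_samples=1000, seed=0):
        """Sample observational data from a random DAG under a linear-Gaussian SEM."""
        rng = np.random.default_rng(seed)
        # An upper-triangular weight matrix guarantees acyclicity: node i only points to j > i.
        W = np.triu(rng.uniform(0.5, 2.0, size=(num_nodes, num_nodes)), k=1)
        W *= rng.random((num_nodes, num_nodes)) < edge_prob   # keep each candidate edge with prob edge_prob
        X = np.zeros((n_samples, num_nodes))
        for j in range(num_nodes):                            # fill columns in topological order
            X[:, j] = X @ W[:, j] + rng.normal(size=n_samples)
        return X, (W != 0).astype(int)                        # samples and ground-truth adjacency matrix

    X, true_graph = simulate_linear_sem()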

Causal Disentangled Representation Learning

gCastle

  • This is a causal structure learning toolchain that contains a broad range of functionality related to causal learning and evaluation.
  • Most of the causal discovery algorithms in gCastle are gradient-based, hence the name: gradient-based Causal structure learning pipeline. A minimal usage sketch is shown after this list.
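
The sketch below follows the import paths quoted in the issues further down (e.g. castle.common); the exact API can differ between gcastle versions, so treat it as an assumption-laden example rather than authoritative documentation.

    from castle.common import GraphDAG
    from castle.metrics import MetricsDAG
    from castle.datasets import DAG, IIDSimulation
    from castle.algorithms import PC

    # simulate a random weighted DAG and linear-Gaussian data, then try to recover the structure
    weighted_dag = DAG.erdos_renyi(n_nodes=10, n_edges=20, weight_range=(0.5, 2.0), seed=18)
    dataset = IIDSimulation(W=weighted_dag, n=2000, method='linear', sem_type='gauss')

    pc = PC()
    pc.learn(dataset.X)

    GraphDAG(pc.causal_matrix, dataset.B)                   # estimated vs. true adjacency
    print(MetricsDAG(pc.causal_matrix, dataset.B).metrics)  # FDR, TPR, SHD, etc.
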
Comments
  • Detailed questions about CausalVAE intervention


    Hello, how is the intervention operation performed in CausalVAE? In the published source code, the intervention forcibly assigns a concept to a fixed integer by using z_mask, as follows:

    # tensor of latent shape filled with the fixed intervention value (adj)
    z_mask = torch.ones(q_m.size()[0], self.z1_dim, self.z2_dim).to(device) * adj
    # overwrite the entries of the intervened concept (index `mask`) in decode_m and decode_v
    decode_m[:, mask, :] = z_mask[:, mask, :]
    decode_v[:, mask, :] = z_mask[:, mask, :]
    

    However, the test results are very poor. Where did I make a mistake?

    The original input image is this: true_0

    The output image with an intervention of 1 on the 0th concept: reconstructed_image_0_0

    Should I modify the published code to achieve the effect in the paper?

    opened by EternityZY 8
  • Can't successfully install CAM package


    I downloaded CAM_1.0.tar.gz and ran setup_CAM.py, but when checking whether CAM and mboost have been installed, I get 'need to install CAM and mboost'. I can't install CAM successfully.

    opened by smile0925 7
  • Questions about CausalVAE


    In the CausalVAE code, is there any special reason for using 4 different decoders?

    Besides, for the pendulum and flow datasets, the latent space dimension should equal the dimension of the label, which is 4. Why is the latent space dimension in the code set to 16?

    I checked the Appendix of the paper: 'we extend the multivariate Gaussian to the matrix Gaussian'. Why is the model designed this way, and what are the advantages of setting up the VAE like this?

    Many thanks for your response

    opened by akdadewrwr 6
  • CAUSAL DISCOVERY WITH REINFORCEMENT LEARNING


    On the synthetic datasets, the code runs without error. On the Sachs dataset, however, it fails with the following error:

        Traceback (most recent call last):
          File "main.py", line 314, in <module>
            main()
          File "main.py", line 284, in main
            graph_batch_pruned = np.transpose(pruning_cam(training_set.inputdata, np.array(graph_batch).T))
          File "D:\code\trustworthyAI-master\Causal_Structure_Learning\Causal_Discovery_RL\src\helpers\cam_with_pruning_cam.py", line 129, in pruning_cam
            X2 = numpy2ri.py2rpy(XX)
        AttributeError: module 'rpy2.robjects.numpy2ri' has no attribute 'py2rpy'

    Please help me figure out how to solve this problem.
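
    A possible direction (hedged; not from the repository): this AttributeError usually points to an rpy2 version mismatch, since numpy2ri.py2rpy exists in rpy2 >= 3.0 while older 2.x releases typically expose py2ri instead. Upgrading rpy2 to 3.x, or a small compatibility shim along these lines, may help:

        from rpy2.robjects import numpy2ri

        # pick whichever converter name the installed rpy2 provides
        if hasattr(numpy2ri, "py2rpy"):      # rpy2 >= 3.0
            to_r = numpy2ri.py2rpy
        else:                                # older rpy2 2.x (assumed naming)
            to_r = numpy2ri.py2ri

        # then use to_r(XX) in place of numpy2ri.py2rpy(XX)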

    opened by cczhangy 5
  • 《CAUSAL DISCOVERY WITH REINFORCEMENT LEARNING》


    When experimenting with the real Sachs dataset, I have downloaded the real dataset from the website; how should I obtain the DAG.npy and data.npy files for it?

    opened by cczhangy 5
  • HPCI Implementation


    Hi,

    First of all, thank you for all your work on this package; collecting together the scattered (and sometimes unimplemented) research on causal inference is infinitely helpful and useful.

    I was just wondering if you had any rough ETA on the implementation of the HPC algorithm within your toolbox? The results of this paper look extremely promising and I would like to test them further.

    Many thanks

    opened by gwinch97 4
  • [Feature request]: Adding GES algorithm to the package


    Hi,

    I think it would be great to add a GES algorithm implementation to gcastle. It would make broad comparisons between algorithms easier.

    There is an existing Python implementation of GES by Juan Gamella: https://github.com/juangamella/ges

    Maybe it could be integrated into gcastle. What are your thoughts?

    If you think it's a good idea, I am happy to help with the integration.

    BTW, I'll be speaking about gcastle in my upcoming conference talk next week: https://ghostday.pl/#agenda

    opened by AlxndrMlk 4
  • No module named 'castle' after installed the gcastle == 1.0.3rc2


    Hi, I have already installed gcastle with pip install gcastle==1.0.3rc2. But when I try to run a demo program such as anm_demo.py, there is still an error at from castle.common import GraphDAG: ModuleNotFoundError: No module named 'castle'.

    Does anyone know why there is a ModuleNotFoundError even though I have already installed the package?
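
    A quick check (hedged; not an official answer): gcastle installs under the import name castle, so this ModuleNotFoundError usually means the demo is being run by a different Python interpreter than the one pip installed into. The snippet below shows which interpreter is running and whether it can see the package:

        import importlib.util
        import sys

        print(sys.executable)                       # interpreter actually running the demo
        print(importlib.util.find_spec("castle"))   # None means 'castle' is not visible to this interpreter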

    opened by matthewmeng 4
  • Bump tensorflow from 1.13.1 to 2.5.1 in /Causal_Structure_Learning/GAE_Causal_Structure_Learning


    Bumps tensorflow from 1.13.1 to 2.5.1.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.5.1

    Release 2.5.1

    This release introduces several vulnerability fixes:

    • Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
    • Fixes a floating point exception in SparseDenseCwiseDiv (CVE-2021-37636)
    • Fixes a null pointer dereference in CompressElement (CVE-2021-37637)
    • Fixes a null pointer dereference in RaggedTensorToTensor (CVE-2021-37638)
    • Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
    • Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
    • Fixes a division by 0 in ResourceScatterDiv (CVE-2021-37642)
    • Fixes a heap OOB in RaggedGather (CVE-2021-37641)
    • Fixes a std::abort raised from TensorListReserve (CVE-2021-37644)
    • Fixes a null pointer dereference in MatrixDiagPartOp (CVE-2021-37643)
    • Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
    • Fixes a bad allocation error in StringNGrams caused by integer conversion (CVE-2021-37646)
    • Fixes a null pointer dereference in SparseTensorSliceDataset (CVE-2021-37647)
    • Fixes an incorrect validation of SaveV2 inputs (CVE-2021-37648)
    • Fixes a null pointer dereference in UncompressElement (CVE-2021-37649)
    • Fixes a segfault and a heap buffer overflow in {Experimental,}DatasetToTFRecord (CVE-2021-37650)
    • Fixes a heap buffer overflow in FractionalAvgPoolGrad (CVE-2021-37651)
    • Fixes a use after free in boosted trees creation (CVE-2021-37652)
    • Fixes a division by 0 in ResourceGather (CVE-2021-37653)
    • Fixes a heap OOB and a CHECK fail in ResourceGather (CVE-2021-37654)
    • Fixes a heap OOB in ResourceScatterUpdate (CVE-2021-37655)
    • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToSparse (CVE-2021-37656)
    • Fixes an undefined behavior arising from reference binding to nullptr in MatrixDiagV* ops (CVE-2021-37657)
    • Fixes an undefined behavior arising from reference binding to nullptr in MatrixSetDiagV* ops (CVE-2021-37658)
    • Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
    • Fixes a division by 0 in inplace operations (CVE-2021-37660)
    • Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
    • Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
    • Fixes a heap OOB in boosted trees (CVE-2021-37664)
    • Fixes vulnerabilities arising from incomplete validation in QuantizeV2 (CVE-2021-37663)
    • Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
    • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToVariant (CVE-2021-37666)
    • Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
    • Fixes an FPE in tf.raw_ops.UnravelIndex (CVE-2021-37668)
    • Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
    • Fixes a heap OOB in UpperBound and LowerBound (CVE-2021-37670)
    • Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
    • Fixes a heap OOB in SdcaOptimizerV2 (CVE-2021-37672)
    • Fixes a CHECK-fail in MapStage (CVE-2021-37673)
    • Fixes a vulnerability arising from incomplete validation in MaxPoolGrad (CVE-2021-37674)
    • Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
    • Fixes a division by 0 in most convolution operators (CVE-2021-37675)
    • Fixes vulnerabilities arising from missing validation in shape inference for Dequantize (CVE-2021-37677)
    • Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
    • Fixes a heap OOB in nested tf.map_fn with RaggedTensors (CVE-2021-37679)

    ... (truncated)


    Commits
    • 8222c1c Merge pull request #51381 from tensorflow/mm-fix-r2.5-build
    • d584260 Disable broken/flaky test
    • f6c6ce3 Merge pull request #51367 from tensorflow-jenkins/version-numbers-2.5.1-17468
    • 3ca7812 Update version numbers to 2.5.1
    • 4fdf683 Merge pull request #51361 from tensorflow/mm-update-relnotes-on-r2.5
    • 05fc01a Put CVE numbers for fixes in parentheses
    • bee1dc4 Update release notes for the new patch release
    • 47beb4c Merge pull request #50597 from kruglov-dmitry/v2.5.0-sync-abseil-cmake-bazel
    • 6f39597 Merge pull request #49383 from ashahab/abin-load-segfault-r2.5
    • 0539b34 Merge pull request #48979 from liufengdb/r2.5-cherrypick
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 4
  • Experimental performance problem on exp3(GPR simulate data)


    Hello! Thank you for the implementation of the method proposed in the paper published at ICLR 2020. I have some questions when reproducing the experimental results.

    Our command is the same as 'exp3' in README.md, and the data we used comes from 'https://github.com/kurowasan/GraN-DAG/blob/master/data/data_p10_e40_n1000_GP.zip', which should be the same as yours. But the performance is worse than the results presented in the paper.

    I'd appreciate it if you could tell me where the problem is. Thanks!

    # Our command:
    python main.py --max_length 10 --data_size 1000 --score_type BIC --reg_type GPR --read_data  --normalize --data_path ./data/data_p10_e40_n1000_GP_seed1 --lambda_flag_default --nb_epoch 20000 --input_dimension 128 --lambda_iter_num 1000
    
    # training log:
    2020-09-22 02:54:35,182 INFO - __main__ - [iter 18000] reward_batch: -4.175537586212158, max_reward: -3.6388002559621855, max_reward_batch: -3.7585916127016294
    2020-09-22 02:56:12,809 INFO - __main__ - [iter 18500] reward_batch: -4.152896881103516, max_reward: -3.6388002559621855, max_reward_batch: -3.823972058605565
    2020-09-22 02:58:37,596 INFO - __main__ - [iter 19000] lambda1 3.6388002559621855, upper 3.6388002559621855, lambda2 0.01, upper 0.01, score_min 3.6388002559621855, cyc_min 0.0
    2020-09-22 02:58:40,635 INFO - __main__ - before pruning: fdr 0.7727272727272727, tpr 0.2564102564102564, fpr 5.666666666666667, shd 35, nnz 44
    2020-09-22 02:58:40,636 INFO - __main__ - after  pruning: fdr 0.84375, tpr 0.1282051282051282, fpr 4.5, shd 37, nnz 32
    2020-09-22 02:58:40,806 INFO - __main__ - [iter 19000] reward_batch: -4.204100608825684, max_reward: -3.6388002559621855, max_reward_batch: -3.7716216965236127
    2020-09-22 03:00:17,209 INFO - __main__ - [iter 19500] reward_batch: -4.158496856689453, max_reward: -3.6388002559621855, max_reward_batch: -3.8702693792409377
    2020-09-22 03:02:40,728 INFO - __main__ - [iter 20000] lambda1 3.6388002559621855, upper 3.6388002559621855, lambda2 0.01, upper 0.01, score_min 3.6388002559621855, cyc_min 0.0
    2020-09-22 03:02:43,523 INFO - __main__ - before pruning: fdr 0.7727272727272727, tpr 0.2564102564102564, fpr 5.666666666666667, shd 35, nnz 44
    2020-09-22 03:02:43,523 INFO - __main__ - after  pruning: fdr 0.84375, tpr 0.1282051282051282, fpr 4.5, shd 37, nnz 32
    2020-09-22 03:02:43,733 INFO - __main__ - [iter 20000] reward_batch: -4.166861534118652, max_reward: -3.6388002559621855, max_reward_batch: -3.723014876623765
    2020-09-22 03:02:45,956 INFO - __main__ - Model saved in file: output/2020-09-22_00-52-32-724/model/tmp.ckpt-20000
    2020-09-22 03:02:45,957 INFO - __main__ - Training COMPLETED !
    
    opened by pp-payphone 4
  • self.loss1 = tf.reduce_mean(self.reward_baseline * self.log_softmax, 0) - 1 * self.lr1 * tf.reduce_mean(self.entropy_regularization, 0)


    Hello! I have read all of your code and the great paper published at ICLR 2020. I have some questions about the code: 1. In training step 1, why does the loss1 of the actor have to be self.loss1 = tf.reduce_mean(self.reward_baseline * self.log_softmax, 0) - 1 * self.lr1 * tf.reduce_mean(self.entropy_regularization, 0)? I didn't find any formula or explanation for this in the paper. 2. What is the reward baseline in this code used for? I would appreciate it if you could answer these questions.
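
    One hedged reading (not the authors' explanation): the quoted line has the shape of a standard REINFORCE surrogate, an advantage (reward minus baseline) times the log-probability of the sampled graph, minus an entropy regularizer that encourages exploration. A minimal sketch with hypothetical names:

        import numpy as np

        def actor_loss(reward, baseline, log_prob, entropy, entropy_weight):
            """REINFORCE-style surrogate: its gradient matches the policy gradient with a baseline."""
            advantage = reward - baseline            # baseline reduces variance without changing the expected gradient
            return np.mean(advantage * log_prob) - entropy_weight * np.mean(entropy)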

    opened by Foyn 4
  • Adding NPVAR algorithm to the package


    opened by AlxndrMlk 4
  • task of data generation


    Hi. When I use the web GUI for the data generation task, I find that the edges in the generated graph are not equal to those in the configuration parameters. When I change the seed while keeping n_nodes and n_edges identical, the edges in the graph may also change. So what is the effect of the seed? Thanks.
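
    One hedged illustration (not the GUI's actual code): with n_nodes and n_edges fixed, the random seed determines which particular edges are sampled, so changing the seed changes the drawn graph even though its size parameters stay the same:

        import networkx as nx

        # same node and edge counts, different seeds -> different edge sets
        g1 = nx.gnm_random_graph(10, 15, seed=1, directed=True)
        g2 = nx.gnm_random_graph(10, 15, seed=2, directed=True)
        print(set(g1.edges()) == set(g2.edges()))   # typically False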

    opened by sususnow 1
  • error in convert_graph_int_to_adj_mat


    Hello,

    I am trying to use Causal Discovery RL on the bnlearn benchmarks. I encounter an error in the convert_graph_int_to_adj_mat function.

    The input to this function is:

    [-1903318164   235405414   101482606   495790951   201853294   378349935
     -1634426101 -1718146065   134742090          64   134742086   134742084
       446475107   470616428 -1785775892 -1768316434   201884524   134217728
       201949548 -1903613075   470286702   101187694 -1734505621   503843118
     -2070074547   134217838   513518542   503875886   235405386   445754223
               0      524358   236432367   134742086   134217792   134217792
       503908622]
    

    And the error message follows:

    Traceback (most recent call last):
      File "main.py", line 337, in <module>
        main()
      File "main.py", line 285, in main
        graph_batch = convert_graph_int_to_adj_mat(graph_int)
      File "/home/user/Causal_Discovery_RL/src/helpers/analyze_utils.py", line 156, in convert_graph_int_to_adj_mat
        for curr_int in graph_int], dtype=int)
      File "/home/user/Causal_Discovery_RL/src/helpers/analyze_utils.py", line 156, in <listcomp>
        for curr_int in graph_int], dtype=int)
    ValueError: invalid literal for int() with base 10: '-'
    
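    A hedged diagnosis (the decode_row helper below is hypothetical, not the repository's code): if each adjacency row is encoded as a single integer whose binary digits are the row entries, decoding only works for non-negative values. With 37 nodes the per-row integers exceed the 32-bit range and show up as negative numbers (e.g. -1903318164 above), and their string form begins with '-', which is exactly the character int() rejects in the traceback.

        import numpy as np

        def decode_row(curr_int, num_nodes):
            """Recover one adjacency-matrix row from its (non-negative) integer encoding."""
            bits = np.binary_repr(curr_int, width=num_nodes)   # left-padded binary string
            return np.array([int(b) for b in bits], dtype=int)

        print(decode_row(11, num_nodes=5))   # -> [0 1 0 1 1]
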
    opened by pckennethma 7
Releases: 1.0.1
Owner
HUAWEI Noah's Ark Lab
Working with and contributing to the open source community in data mining, artificial intelligence, and related fields.