Trustworthy AI related projects

Overview

Trustworthy AI

This repository aims to collect trustworthy-AI-related projects from Huawei Noah's Ark Lab.
Current projects include:

Causal Structure Learning

  • Causal_Discovery_RL: code, datasets, and training logs of the experimental results for the paper 'Causal Discovery with Reinforcement Learning', ICLR 2020 (oral).
  • GAE_Causal_Structure_Learning: an implementation of 'A Graph Autoencoder Approach to Causal Structure Learning', NeurIPS 2019 Causal Machine Learning Workshop.
  • Datasets:
    • Synthetic datasets: code for generating the synthetic datasets used in the paper.
    • Real datasets: a very challenging real dataset whose objective is to recover causal structure from time-series data; the ground-truth graph is obtained from expert knowledge. We welcome everyone to try this dataset and report their results!
  • We will also release the code for other gradient-based causal structure learning methods.

Causal Disentangled Representation Learning

  • CausalVAE: the published source code for causal disentangled representation learning referenced in the issues below.

gCastle

  • This is a causal structure learning toolchain that contains various functionality for causal learning and evaluation.
  • Most of the causal discovery algorithms in gCastle are gradient-based, hence the name: gradient-based Causal structure learning pipeline.
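
As a hedged illustration of that pipeline (not official documentation; the exact class and method names are assumed from recent gcastle releases and may differ between versions), a typical end-to-end use looks roughly like this:

    # Sketch of a typical gCastle workflow: simulate data from a random DAG,
    # run a causal discovery algorithm, and evaluate the learned graph.
    from castle.datasets import DAG, IIDSimulation
    from castle.algorithms import PC
    from castle.metrics import MetricsDAG

    true_dag = DAG.erdos_renyi(n_nodes=10, n_edges=20, weight_range=(0.5, 2.0), seed=1)
    data = IIDSimulation(W=true_dag, n=2000, method='linear', sem_type='gauss').X

    model = PC()
    model.learn(data)

    metrics = MetricsDAG(model.causal_matrix, (true_dag != 0).astype(int))
    print(metrics.metrics)   # FDR, TPR, SHD, and other evaluation metrics
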
Comments
  • Detailed questions about CausalVAE intervention


    Hello, how does the CausalVAE intervention operation work? In the published source code, an intervention forcibly assigns a concept a fixed value using "z_mask", as follows:

    z_mask = torch.ones(q_m.size()[0], self.z1_dim, self.z2_dim).to(device) * adj   # constant tensor filled with the intervention value adj
    decode_m[:, mask, :] = z_mask[:, mask, :]   # overwrite decode_m (mean parameters) for the masked concept
    decode_v[:, mask, :] = z_mask[:, mask, :]   # overwrite decode_v (variance parameters) for the masked concept
    

    The test results are very poor. Where did I make a mistake?

    The original input image is attached (true_0).

    The output image after intervening with value 1 on the 0th concept is attached (reconstructed_image_0_0).

    Should I modify the published code to achieve the effect in the paper?
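
    For reference, a minimal sketch of how a do-style intervention on a single latent concept is often written, fixing that concept's posterior mean to the intervention value and collapsing its variance before decoding (illustrative only; the tensor names below are assumptions, not the released CausalVAE code):

    import torch

    # Illustrative do-intervention on latent concept `mask`: the decoder should see
    # exactly the intervention value for that concept.
    def intervene(decode_m, decode_v, mask, value, eps=1e-8):
        decode_m = decode_m.clone()
        decode_v = decode_v.clone()
        decode_m[:, mask, :] = value   # fix the concept's mean to the intervention value
        decode_v[:, mask, :] = eps     # (near-)zero variance, so sampling returns the value itself
        return decode_m, decode_v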

    opened by EternityZY 8
  • Can't successfully install CAM package


    I downloaded CAM_1.0.tar.gz and ran setup_CAM.py, but when checking whether CAM and mboost have been installed, I get 'need to install CAM and mboost'. I cannot install CAM successfully. (screenshot attached)

    opened by smile0925 7
  • Questions about CausalVAE


    In the CausalVAE code, is there any special reason for using 4 different decoders?

    Besides, for the pendulum and flow datasets, the latent-space dimension should equal the label dimension, which is 4. Why is the latent-space dimension set to 16 in the code?

    I checked the appendix of the paper: 'we extend the multivariate Gaussian to the matrix Gaussian.' Why is the model designed this way, and what are the advantages of setting up the VAE like this?

    Many thanks for your response
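
    For reference, one plausible reading of the 16 (a guess, not confirmed by the authors): with z1_dim = 4 concepts and z2_dim = 4 dimensions per concept, the 16-dimensional latent is reshaped into a 4x4 matrix-valued latent, which is where the matrix Gaussian comes in:

    import torch

    batch, z1_dim, z2_dim = 8, 4, 4            # 4 concepts x 4 dims per concept (assumed)
    z = torch.randn(batch, z1_dim * z2_dim)    # 16-dimensional latent vector per sample
    z = z.view(batch, z1_dim, z2_dim)          # matrix-valued latent: one 4-d block per concept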

    opened by akdadewrwr 6
  • CAUSAL DISCOVERY WITH REINFORCEMENT LEARNING


    On the synthetic datasets the code runs without error, but on the Sachs dataset it fails as follows:

    Traceback (most recent call last):
      File "main.py", line 314, in <module>
        main()
      File "main.py", line 284, in main
        graph_batch_pruned = np.transpose(pruning_cam(training_set.inputdata, np.array(graph_batch).T))
      File "D:\code\trustworthyAI-master\Causal_Structure_Learning\Causal_Discovery_RL\src\helpers\cam_with_pruning_cam.py", line 129, in pruning_cam
        X2 = numpy2ri.py2rpy(XX)
    AttributeError: module 'rpy2.robjects.numpy2ri' has no attribute 'py2rpy'

    Please help me solve this problem.
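
    For reference, this AttributeError usually indicates an older rpy2 installation: rpy2 >= 3.0 exposes numpy2ri.py2rpy, while older releases name the converter numpy2ri.py2ri, so upgrading rpy2 is the simplest fix. A hedged compatibility shim (XX below is just a placeholder NumPy array) could look like:

    import numpy as np
    from rpy2.robjects import numpy2ri

    XX = np.random.rand(5, 3)             # placeholder data for illustration

    if hasattr(numpy2ri, "py2rpy"):       # rpy2 >= 3.0
        X2 = numpy2ri.py2rpy(XX)
    else:                                 # older rpy2 releases
        X2 = numpy2ri.py2ri(XX)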

    opened by cczhangy 5
  • 《CAUSAL DISCOVERY WITH REINFORCEMENT LEARNING》


    I am experimenting with the real Sachs dataset, which I have downloaded from the website. How should I obtain the DAG.npy and data.npy files for it?

    opened by cczhangy 5
  • HPCI Implementation


    Hi,

    First of all, thank you for all your work on this package; collecting together the scattered (and sometimes unimplemented) research on causal inference is immensely helpful and useful.

    I was just wondering if you had any rough ETA on the implementation of the HPC algorithm within your toolbox? The results of this paper look extremely promising and I would like to test them further.

    Many thanks

    opened by gwinch97 4
  • [Feature request]: Adding GES algorithm to the package


    Hi,

    I think it would be great to add a GES implementation to gcastle. It would make broad comparisons between algorithms easier.

    There is an existing Python implementation of GES by Juan Gamella: https://github.com/juangamella/ges
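
    For reference, a hedged sketch of how that package is typically called (API assumed from its README, not part of gcastle):

    import numpy as np
    import ges

    data = np.random.randn(1000, 5)        # placeholder data matrix (samples x variables)
    estimate, score = ges.fit_bic(data)    # estimated CPDAG adjacency matrix and its BIC score
    print(estimate)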

    Maybe it could be integrated into gcastle. What are your thoughts?

    If you think it's a good idea, I am happy to help with the integration.

    BTW, I'll be speaking about gcastle in my upcoming conference talk next week: https://ghostday.pl/#agenda

    opened by AlxndrMlk 4
  • No module named 'castle' after installed the gcastle == 1.0.3rc2


    Hi, I installed gcastle with pip install gcastle==1.0.3rc2, but when I try to run a demo program such as anm_demo.py, I still get an error at from castle.common import GraphDAG: ModuleNotFoundError: No module named 'castle'.

    Does anyone know why there is a ModuleNotFoundError even though I have already installed the package?
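
    For reference, a quick, hedged way to check whether the interpreter running the demo is the same one pip installed gcastle into (a common cause of this kind of ModuleNotFoundError):

    import sys
    import importlib.util

    print(sys.executable)                         # the interpreter actually being used
    print(importlib.util.find_spec("castle"))     # None means 'castle' is not visible to this interpreter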

    opened by matthewmeng 4
  • Bump tensorflow from 1.13.1 to 2.5.1 in /Causal_Structure_Learning/GAE_Causal_Structure_Learning


    Bumps tensorflow from 1.13.1 to 2.5.1.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.5.1

    Release 2.5.1

    This release introduces several vulnerability fixes:

    • Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
    • Fixes a floating point exception in SparseDenseCwiseDiv (CVE-2021-37636)
    • Fixes a null pointer dereference in CompressElement (CVE-2021-37637)
    • Fixes a null pointer dereference in RaggedTensorToTensor (CVE-2021-37638)
    • Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
    • Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
    • Fixes a division by 0 in ResourceScatterDiv (CVE-2021-37642)
    • Fixes a heap OOB in RaggedGather (CVE-2021-37641)
    • Fixes a std::abort raised from TensorListReserve (CVE-2021-37644)
    • Fixes a null pointer dereference in MatrixDiagPartOp (CVE-2021-37643)
    • Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
    • Fixes a bad allocation error in StringNGrams caused by integer conversion (CVE-2021-37646)
    • Fixes a null pointer dereference in SparseTensorSliceDataset (CVE-2021-37647)
    • Fixes an incorrect validation of SaveV2 inputs (CVE-2021-37648)
    • Fixes a null pointer dereference in UncompressElement (CVE-2021-37649)
    • Fixes a segfault and a heap buffer overflow in {Experimental,}DatasetToTFRecord (CVE-2021-37650)
    • Fixes a heap buffer overflow in FractionalAvgPoolGrad (CVE-2021-37651)
    • Fixes a use after free in boosted trees creation (CVE-2021-37652)
    • Fixes a division by 0 in ResourceGather (CVE-2021-37653)
    • Fixes a heap OOB and a CHECK fail in ResourceGather (CVE-2021-37654)
    • Fixes a heap OOB in ResourceScatterUpdate (CVE-2021-37655)
    • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToSparse (CVE-2021-37656)
    • Fixes an undefined behavior arising from reference binding to nullptr in MatrixDiagV* ops (CVE-2021-37657)
    • Fixes an undefined behavior arising from reference binding to nullptr in MatrixSetDiagV* ops (CVE-2021-37658)
    • Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
    • Fixes a division by 0 in inplace operations (CVE-2021-37660)
    • Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
    • Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
    • Fixes a heap OOB in boosted trees (CVE-2021-37664)
    • Fixes vulnerabilities arising from incomplete validation in QuantizeV2 (CVE-2021-37663)
    • Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
    • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToVariant (CVE-2021-37666)
    • Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
    • Fixes an FPE in tf.raw_ops.UnravelIndex (CVE-2021-37668)
    • Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
    • Fixes a heap OOB in UpperBound and LowerBound (CVE-2021-37670)
    • Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
    • Fixes a heap OOB in SdcaOptimizerV2 (CVE-2021-37672)
    • Fixes a CHECK-fail in MapStage (CVE-2021-37673)
    • Fixes a vulnerability arising from incomplete validation in MaxPoolGrad (CVE-2021-37674)
    • Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
    • Fixes a division by 0 in most convolution operators (CVE-2021-37675)
    • Fixes vulnerabilities arising from missing validation in shape inference for Dequantize (CVE-2021-37677)
    • Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
    • Fixes a heap OOB in nested tf.map_fn with RaggedTensors (CVE-2021-37679)

    ... (truncated)


    Commits
    • 8222c1c Merge pull request #51381 from tensorflow/mm-fix-r2.5-build
    • d584260 Disable broken/flaky test
    • f6c6ce3 Merge pull request #51367 from tensorflow-jenkins/version-numbers-2.5.1-17468
    • 3ca7812 Update version numbers to 2.5.1
    • 4fdf683 Merge pull request #51361 from tensorflow/mm-update-relnotes-on-r2.5
    • 05fc01a Put CVE numbers for fixes in parentheses
    • bee1dc4 Update release notes for the new patch release
    • 47beb4c Merge pull request #50597 from kruglov-dmitry/v2.5.0-sync-abseil-cmake-bazel
    • 6f39597 Merge pull request #49383 from ashahab/abin-load-segfault-r2.5
    • 0539b34 Merge pull request #48979 from liufengdb/r2.5-cherrypick
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 4
  • Experimental performance problem on exp3(GPR simulate data)


    Hello! Thank you for the implementation of the method proposed in the paper published at ICLR 2020. I have some questions about reproducing the experimental results.

    Our command is the same as 'exp3' in README.md, and the data we used comes from 'https://github.com/kurowasan/GraN-DAG/blob/master/data/data_p10_e40_n1000_GP.zip', which should be the same as yours. However, the performance is worse than the results presented in the paper.

    I'd appreciate it if you could tell me where the problem is. Thanks!

    # Our command:
    python main.py --max_length 10 --data_size 1000 --score_type BIC --reg_type GPR --read_data  --normalize --data_path ./data/data_p10_e40_n1000_GP_seed1 --lambda_flag_default --nb_epoch 20000 --input_dimension 128 --lambda_iter_num 1000
    
    # training log:
    2020-09-22 02:54:35,182 INFO - __main__ - [iter 18000] reward_batch: -4.175537586212158, max_reward: -3.6388002559621855, max_reward_batch: -3.7585916127016294
    2020-09-22 02:56:12,809 INFO - __main__ - [iter 18500] reward_batch: -4.152896881103516, max_reward: -3.6388002559621855, max_reward_batch: -3.823972058605565
    2020-09-22 02:58:37,596 INFO - __main__ - [iter 19000] lambda1 3.6388002559621855, upper 3.6388002559621855, lambda2 0.01, upper 0.01, score_min 3.6388002559621855, cyc_min 0.0
    2020-09-22 02:58:40,635 INFO - __main__ - before pruning: fdr 0.7727272727272727, tpr 0.2564102564102564, fpr 5.666666666666667, shd 35, nnz 44
    2020-09-22 02:58:40,636 INFO - __main__ - after  pruning: fdr 0.84375, tpr 0.1282051282051282, fpr 4.5, shd 37, nnz 32
    2020-09-22 02:58:40,806 INFO - __main__ - [iter 19000] reward_batch: -4.204100608825684, max_reward: -3.6388002559621855, max_reward_batch: -3.7716216965236127
    2020-09-22 03:00:17,209 INFO - __main__ - [iter 19500] reward_batch: -4.158496856689453, max_reward: -3.6388002559621855, max_reward_batch: -3.8702693792409377
    2020-09-22 03:02:40,728 INFO - __main__ - [iter 20000] lambda1 3.6388002559621855, upper 3.6388002559621855, lambda2 0.01, upper 0.01, score_min 3.6388002559621855, cyc_min 0.0
    2020-09-22 03:02:43,523 INFO - __main__ - before pruning: fdr 0.7727272727272727, tpr 0.2564102564102564, fpr 5.666666666666667, shd 35, nnz 44
    2020-09-22 03:02:43,523 INFO - __main__ - after  pruning: fdr 0.84375, tpr 0.1282051282051282, fpr 4.5, shd 37, nnz 32
    2020-09-22 03:02:43,733 INFO - __main__ - [iter 20000] reward_batch: -4.166861534118652, max_reward: -3.6388002559621855, max_reward_batch: -3.723014876623765
    2020-09-22 03:02:45,956 INFO - __main__ - Model saved in file: output/2020-09-22_00-52-32-724/model/tmp.ckpt-20000
    2020-09-22 03:02:45,957 INFO - __main__ - Training COMPLETED !
    
    opened by pp-payphone 4
  • self.loss1 = tf.reduce_mean(self.reward_baseline * self.log_softmax, 0) - 1 * self.lr1 * tf.reduce_mean(self.entropy_regularization, 0)


    Hello! I have read all of your code and the great paper published at ICLR 2020. I have some questions about the code:

    1. In training step 1, why does the actor's loss have to be self.loss1 = tf.reduce_mean(self.reward_baseline * self.log_softmax, 0) - 1 * self.lr1 * tf.reduce_mean(self.entropy_regularization, 0)? I didn't find any formula or explanation for this in the paper.
    2. What is the reward baseline in this code used for?

    I would appreciate it if you could answer these questions.
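
    For reference, a minimal, hedged sketch of the standard REINFORCE-with-baseline objective plus an entropy bonus, which is the usual reading of a loss with this shape (illustrative only; variable names are assumptions and sign conventions may differ from the repository):

    import tensorflow as tf

    # Policy-gradient loss: advantage-weighted negative log-likelihood minus an entropy bonus.
    # log_softmax: log-probability of the sampled graph/action sequence, shape (batch,)
    # reward, baseline: per-sample reward and a moving-average baseline, shape (batch,)
    # entropy: per-sample policy entropy, shape (batch,)
    def actor_loss(log_softmax, reward, baseline, entropy, entropy_weight=1e-3):
        advantage = reward - baseline                        # baseline reduces gradient variance
        pg_loss = -tf.reduce_mean(advantage * log_softmax)   # maximize advantage-weighted log-likelihood
        return pg_loss - entropy_weight * tf.reduce_mean(entropy)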

    opened by Foyn 4
  • Adding NPVAR algorithm to the package


    opened by AlxndrMlk 4
  • task of data generation


    Hi. When I use the web GUI for the data-generation task, I find that the number of edges in the generated graph does not match the number given in the configuration parameters. When I change the seed while keeping n_nodes and n_edges identical, the edges in the graph may also change. So what is the effect of the seed? Thanks.
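
    For reference, a hedged illustration of why the realized edge count can differ from n_edges: in many Erdos-Renyi-style generators, n_edges controls the expected rather than exact number of edges, and the seed drives the random draw (whether that applies here depends on the gcastle/GUI implementation; the API below is assumed from gcastle):

    from castle.datasets import DAG

    for seed in (1, 2, 3):
        B = DAG.erdos_renyi(n_nodes=10, n_edges=20, seed=seed)   # adjacency matrix for this seed
        print(seed, int((B != 0).sum()))                          # realized edge count may vary with the seed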

    opened by sususnow 1
  • error in convert_graph_int_to_adj_mat


    Hello,

    I am trying to use Causal Discovery RL on bnlearn benchmarks and encounter an error in the convert_graph_int_to_adj_mat function.

    The input to this function is

    [-1903318164   235405414   101482606   495790951   201853294   378349935
     -1634426101 -1718146065   134742090          64   134742086   134742084
       446475107   470616428 -1785775892 -1768316434   201884524   134217728
       201949548 -1903613075   470286702   101187694 -1734505621   503843118
     -2070074547   134217838   513518542   503875886   235405386   445754223
               0      524358   236432367   134742086   134217792   134217792
       503908622]
    

    And the error message follows:

    Traceback (most recent call last):
      File "main.py", line 337, in <module>
        main()
      File "main.py", line 285, in main
        graph_batch = convert_graph_int_to_adj_mat(graph_int)
      File "/home/user/Causal_Discovery_RL/src/helpers/analyze_utils.py", line 156, in convert_graph_int_to_adj_mat
        for curr_int in graph_int], dtype=int)
      File "/home/user/Causal_Discovery_RL/src/helpers/analyze_utils.py", line 156, in <listcomp>
        for curr_int in graph_int], dtype=int)
    ValueError: invalid literal for int() with base 10: '-'
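
    For reference, a minimal sketch (not the repository's implementation) of the usual integer-to-row conversion, where each integer encodes one adjacency-matrix row as a bitmask; the negative values in the input above suggest a 32-bit integer overflow, which is expected once the graph has more than 32 nodes (this one has 37):

    import numpy as np

    def int_to_adj_row(curr_int, num_nodes):
        # Decode a non-negative bitmask into a 0/1 adjacency row.
        if curr_int < 0:
            raise ValueError("negative bitmask: the encoding likely overflowed a 32-bit integer")
        bits = np.binary_repr(int(curr_int), width=num_nodes)   # zero-padded binary string
        return np.array([int(b) for b in bits], dtype=int)

    row = int_to_adj_row(134742090, 37)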
    
    opened by pckennethma 7
Releases (1.0.1)
Owner
HUAWEI Noah's Ark Lab
Working with and contributing to the open source community in data mining, artificial intelligence, and related fields.