This is a TensorFlow-based rotation detection benchmark, also called AlphaRotate.

Overview

AlphaRotate: A Rotation Detection Benchmark using TensorFlow


Abstract

AlphaRotate is maintained by Xue Yang of Shanghai Jiao Tong University, supervised by Prof. Junchi Yan.

Papers and code related to remote sensing/aerial image detection: DOTA-DOAI.

Techniques: see the methods listed in the performance tables below (e.g., CSL, DCL, GWD, BCD, KLD, KFIoU, RSDet, R3Det).

These rotation detectors are all modified from horizontal detectors such as RetinaNet, FCOS, and Faster RCNN (R2CNN).

Latest Performance

DOTA (Task1)

Baseline

| Backbone | Neck | Training/test dataset | Data Augmentation | Epoch | NMS |
|---|---|---|---|---|---|
| ResNet50_v1d 600->800 | FPN | trainval/test | × | 13 (AP50) or 17 (AP50:95) is enough for baseline (default is 13) | gpu nms (slightly worse <1% than cpu nms but faster) |
| Method | Baseline | DOTA1.0 | DOTA1.5 | DOTA2.0 | Model | Anchor | Angle Pred. | Reg. Loss | Angle Range | Configs |
|---|---|---|---|---|---|---|---|---|---|---|
| - | RetinaNet-R | 67.25 | 56.50 | 42.04 | Baidu Drive (bi8b) | R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 64.17 | 56.10 | 43.06 | Baidu Drive (bi8b) | H | Reg. (∆⍬) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 65.33 | 57.21 | 44.58 | Baidu Drive (bi8b) | H | Reg. (sin⍬, cos⍬) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 65.73 | 58.87 | 44.16 | Baidu Drive (bi8b) | H | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| IoU-Smooth L1 | RetinaNet-H | 66.99 | 59.17 | 46.31 | Baidu Drive (qcvc) | H | Reg. (∆⍬) | iou-smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| RIDet | RetinaNet-H | 66.06 | 58.91 | 45.35 | Baidu Drive (njjv) | H | Quad. | hungarian loss | - | dota1.0, dota1.5, dota2.0 |
| RSDet | RetinaNet-H | 67.27 | 61.42 | 46.71 | Baidu Drive (2a1f) | H | Quad. | modulated loss | - | dota1.0, dota1.5, dota2.0 |
| CSL | RetinaNet-H | 67.38 | 58.55 | 43.34 | Baidu Drive (sdbb) | H | Cls.: Gaussian (r=1, w=10) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| DCL | RetinaNet-H | 67.39 | 59.38 | 45.46 | Baidu Drive (m7pq) | H | Cls.: BCL (w=180/256) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | FCOS | 67.69 | 61.05 | 48.10 | Baidu Drive (pic4) | - | Quad. | smooth L1 | - | dota1.0, dota1.5, dota2.0 |
| RSDet++ | FCOS | 67.91 | 62.18 | 48.81 | Baidu Drive (8ww5) | - | Quad. | modulated loss | - | dota1.0, dota1.5, dota2.0 |
| GWD | RetinaNet-H | 68.93 | 60.03 | 46.65 | Baidu Drive (7g5a) | H | Reg. (∆⍬) | gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| GWD + SWA | RetinaNet-H | 69.92 | 60.60 | 47.63 | Baidu Drive (qcn0) | H | Reg. (∆⍬) | gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| BCD | RetinaNet-H | 71.23 | 60.78 | 47.48 | Baidu Drive (0puk) | H | Reg. (∆⍬) | bcd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KLD | RetinaNet-H | 71.28 | 62.50 | 47.69 | Baidu Drive (o6rv) | H | Reg. (∆⍬) | kld | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KFIoU | RetinaNet-H | 70.64 | 62.71 | 48.04 | Baidu Drive (o72o) | H | Reg. (∆⍬) | kf | [-90,0) | dota1.0, dota1.5, dota2.0 |
| R3Det | RetinaNet-H | 70.66 | 62.91 | 48.43 | Baidu Drive (n9mv) | H->R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| DCL | R3Det | 71.21 | 61.98 | 48.71 | Baidu Drive (eg2s) | H->R | Cls.: BCL (w=180/256) | iou-smooth L1 | [-90,0)->[-90,90) | dota1.0, dota1.5, dota2.0 |
| GWD | R3Det | 71.56 | 63.22 | 49.25 | Baidu Drive (jb6e) | H->R | Reg. (∆⍬) | smooth L1->gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| BCD | R3Det | 72.22 | 63.53 | 49.71 | Baidu Drive (v60g) | H->R | Reg. (∆⍬) | bcd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KLD | R3Det | 71.73 | 65.18 | 50.90 | Baidu Drive (tq7f) | H->R | Reg. (∆⍬) | kld | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KFIoU | R3Det | 72.28 | 64.69 | 50.41 | Baidu Drive (u77v) | H->R | Reg. (∆⍬) | kf | [-90,0) | dota1.0, dota1.5, dota2.0 |
| - | R2CNN (Faster-RCNN) | 72.27 | 66.45 | 52.35 | Baidu Drive (02s5) | H->R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |

SOTA

| Method | Backbone | DOTA1.0 | Model | MS | Data Augmentation | Epoch | Configs |
|---|---|---|---|---|---|---|---|
| R2CNN-BCD | ResNet152_v1d-FPN | 79.54 | Baidu Drive (h2u1) | | | 34 | dota1.0 |
| RetinaNet-BCD | ResNet152_v1d-FPN | 78.52 | Baidu Drive (0puk) | | | 51 | dota1.0 |
| R3Det-BCD | ResNet50_v1d-FPN | 79.08 | Baidu Drive (v60g) | | | 51 | dota1.0 |
| R3Det-BCD | ResNet152_v1d-FPN | 79.95 | Baidu Drive (v60g) | | | 51 | dota1.0 |

Note (see the config sketch below):

  • Single GPU training: SAVE_WEIGHTS_INTE = iter_epoch * 1 (DOTA1.0: iter_epoch=27000, DOTA1.5: iter_epoch=32000, DOTA2.0: iter_epoch=40000)
  • Multi-GPU training (better): SAVE_WEIGHTS_INTE = iter_epoch * 2
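
To make the note concrete, here is a minimal sketch of how this could look in a cfgs.py file; SAVE_WEIGHTS_INTE is the config key named above, while iter_epoch and num_gpu below are illustrative helper names, not existing keys.

iter_epoch = 27000  # DOTA1.0 (use 32000 for DOTA1.5, 40000 for DOTA2.0)
num_gpu = 1         # single-GPU training; use >= 2 for the multi-GPU setting above
SAVE_WEIGHTS_INTE = iter_epoch * (2 if num_gpu > 1 else 1)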

Installation

Manual configuration

pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"

Or, you can simply install AlphaRotate with the following command:

pip install alpharotate

Docker

Docker image: yangxue2docker/yx-tf-det:tensorflow1.13.1-cuda10-gpu-py3

Note: For RTX 30xx-series graphics cards, I recommend this blog for installing TF 1.x, or download an image from the TensorFlow release notes according to your development environment, e.g. nvcr.io/nvidia/tensorflow:20.11-tf1-py3.

Download Model

Pretrain weights

Download the pretrained weights you need from the following three options, and then put them into $PATH_ROOT/dataloader/pretrained_weights.

  1. MXNet pretrained weights (recommended in this repo, default in NET_NAME; see the sketch after this list): resnet_v1d, resnet_v1b, refer to gluon2TF.
  2. TensorFlow pretrained weights: resnet50_v1, resnet101_v1, resnet152_v1, efficientnet, mobilenet_v2, darknet53 (Baidu Drive (1jg2), Google Drive).
  3. PyTorch pretrained weights, refer to pretrain_zoo.py and Others.
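
As a minimal sketch of where the default backbone is selected (the exact value should be taken from the cfgs.py you use; 'resnet50_v1d' below is an assumption matching the MXNet option above):

# Backbone selection in cfgs.py; match it to the pretrained weight you downloaded.
NET_NAME = 'resnet50_v1d'  # e.g. 'resnet101_v1d', 'resnet152_v1d' for larger backbones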

Trained weights

  1. Please download the trained models provided by this project, then put them into $PATH_ROOT/output/pretained_weights.

Train

  1. If you want to train your own dataset, please note the following (a config sketch follows this list):

    (1) Select the detector and dataset you want to use, and mark them as #DETECTOR and #DATASET (such as #DETECTOR=retinanet and #DATASET=DOTA)
    (2) Modify parameters (such as CLASS_NUM, DATASET_NAME, VERSION, etc.) in $PATH_ROOT/libs/configs/#DATASET/#DETECTOR/cfgs_xxx.py
    (3) Copy $PATH_ROOT/libs/configs/#DATASET/#DETECTOR/cfgs_xxx.py to $PATH_ROOT/libs/configs/cfgs.py
    (4) Add category information in $PATH_ROOT/libs/label_name_dict/label_dict.py     
    (5) Add data_name to $PATH_ROOT/dataloader/dataset/read_tfrecord.py  
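
    As a rough sketch of steps (2), (4), and (5) above — the config keys are the ones named in step (2), but the values and the label-map layout are assumptions, so check the shipped configs and label_dict.py for the authoritative structure:

    # Hypothetical excerpt of $PATH_ROOT/libs/configs/cfgs.py for a custom dataset
    VERSION = 'RetinaNet_MyDataset_1x_20211101'  # run name; outputs and checkpoints are grouped under it
    DATASET_NAME = 'MyDataset'                   # must match the data_name added to read_tfrecord.py
    CLASS_NUM = 3                                # number of object categories (background excluded)

    # Hypothetical category entry for $PATH_ROOT/libs/label_name_dict/label_dict.py
    NAME_LABEL_MAP = {'back_ground': 0, 'car': 1, 'plane': 2, 'ship': 3}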
    
  2. Make tfrecord
    If the image is very large (such as in the DOTA dataset), it needs to be cropped. Take the DOTA dataset as an example:

    cd $PATH_ROOT/dataloader/dataset/DOTA
    python data_crop.py
    

    If the image does not need to be cropped, just convert the annotation files into XML format; refer to example.xml (a rough annotation-writing sketch follows the command below).

    cd $PATH_ROOT/dataloader/dataset/  
    python convert_data_to_tfrecord.py --root_dir='/PATH/TO/DOTA/' 
                                       --xml_dir='labeltxt'
                                       --image_dir='images'
                                       --save_name='train' 
                                       --img_format='.png' 
                                       --dataset='DOTA'
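
    For reference, a rough sketch of writing one such annotation with xml.etree.ElementTree. The tag names (filename, size, object/name, and the four corner points x0..y3) are assumptions based on a VOC-style layout, so verify them against example.xml before relying on this:

    import xml.etree.ElementTree as ET

    def write_annotation(xml_path, image_name, width, height, objects):
        """objects: list of (label, [x0, y0, x1, y1, x2, y2, x3, y3]) quadrilaterals."""
        root = ET.Element('annotation')
        ET.SubElement(root, 'filename').text = image_name
        size = ET.SubElement(root, 'size')
        ET.SubElement(size, 'width').text = str(width)
        ET.SubElement(size, 'height').text = str(height)
        ET.SubElement(size, 'depth').text = '3'
        for label, quad in objects:
            obj = ET.SubElement(root, 'object')
            ET.SubElement(obj, 'name').text = label
            box = ET.SubElement(obj, 'bndbox')
            for tag, value in zip(['x0', 'y0', 'x1', 'y1', 'x2', 'y2', 'x3', 'y3'], quad):
                ET.SubElement(box, tag).text = str(value)
        ET.ElementTree(root).write(xml_path)

    # Example: one 'plane' labelled by its four corner points in P0000.png
    write_annotation('P0000.xml', 'P0000.png', 1024, 1024,
                     [('plane', [10, 10, 60, 12, 58, 40, 8, 38])])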
    
  3. Start training

    cd $PATH_ROOT/tools/#DETECTOR
    python train.py
    

Test

  1. For large-scale images, take the DOTA dataset as an example (the output files and visualizations are in $PATH_ROOT/tools/#DETECTOR/test_dota/VERSION):

    cd $PATH_ROOT/tools/#DETECTOR
    python test_dota.py --test_dir='/PATH/TO/IMAGES/'  
                        --gpus=0,1,2,3,4,5,6,7  
                        -ms (multi-scale testing, optional)
                        -s (visualization, optional)
                        -cn (use cpu nms, slightly better <1% than gpu nms but slower, optional)
    
    or (recommended in this repo, better than multi-scale testing)
    
    python test_dota_sota.py --test_dir='/PATH/TO/IMAGES/'  
                             --gpus=0,1,2,3,4,5,6,7  
                             -s (visualization, optional)
                             -cn (use cpu nms, slightly better <1% than gpu nms but slower, optional)
    

    Notice: To make resuming from a breakpoint convenient, the result file is opened in 'a+' (append) mode. If a model with the same #VERSION needs to be tested again, the original test results must be deleted first.

  2. For small-scale images, take the HRSC2016 dataset as an example:

    cd $PATH_ROOT/tools/#DETECTOR
    python test_hrsc2016.py --test_dir='/PATH/TO/IMAGES/'  
                            --gpu=0
                            --image_ext='bmp'
                            --test_annotation_path='/PATH/TO/ANNOTATIONS'
                            -s (visualization, optional)
    

Tensorboard

cd $PATH_ROOT/output/summary
tensorboard --logdir=.

Citation

If you find our code useful for your research, please consider citing:

@article{yang2021alpharotate,
    author  = {Yang, Xue and Zhou, Yue and Yan, Junchi},
    title   = {AlphaRotate: A Rotation Detection Benchmark using TensorFlow},
    year    = {2021},
    url     = {https://github.com/yangxue0827/RotationDetection}
}

Reference

1. https://github.com/endernewton/tf-faster-rcnn
2. https://github.com/zengarden/light_head_rcnn
3. https://github.com/tensorflow/models/tree/master/research/object_detection
4. https://github.com/fizyr/keras-retinanet

Comments
  • "ValueError: too many values to unpack" RotationDetection/alpharotate/libs/models/detectors/scrdet/build_whole_network.py 147Line

    Problem path: RotationDetection/alpharotate/libs/models/detectors/scrdet/build_whole_network.py

    Line 147: feature, pa_mask = self.build_backbone(input_img_batch)

    Since only one variable is returned, why store it in two variables? This raises "ValueError: too many values to unpack".

    What is the meaning and role of the pa_mask parameter?

    opened by sangheonEN 38
  • Gradient about IOU-Smooth L1 loss in SCRDet

    Here is the related link.

    In the link I argue that the backward gradient will always be 0.

    From another point of view, if |u| is made non-differentiable the gradient will not be 0, but then the gradient of u/|u| is no longer 1.

    @yangxue0827 Could you please help me out? Many thanks!

    opened by igo312 10
  • How may I use this repo for building orientation detection?

    Sorry if my question is irrelevant, but this is what I was instructed to do as an assignment: use this repo to detect the orientation of various buildings in an image. Please guide me on this.

    opened by neutr0nStar 8
  • Request for model files tested on DOTA-v1.5

    opened by chandlerbing65nm 7
  • Alternative cloud for trained models

    Hello @yangxue0827,

    I cannot create a Baidu account. Could you upload the trained models to another cloud service such as Dropbox, OneDrive, or Google Drive? If that is not possible, could you send me all these trained models via email? Thank you so much.

    Best regards, Roberto Valle

    opened by bobetocalo 7
  • Download Trained Model

    Hello

    I would like to know where I can download the trained models (not the pretrained ones):

    "Please download trained models by this project, then put them to trained_weights."

    When I go to Baidu and enter the code from https://github.com/yangxue0827/RotationDetection/issues/29#issuecomment-896431674, I still cannot download because a pop-up appears that I cannot read. Is there any other place (e.g. Google Drive) where I can find it?

    Best regards and thank you

    opened by Testbild 6
  • Where are the trained models?

    Thank you for your contribution to the detection community. I noticed "Latest: More results and trained models are available in the MODEL_ZOO.md." in README.md, but I can't find MODEL_ZOO.md. If you could provide the trained models, words cannot express how thankful I would be.

    opened by 1995gatch 6
  • ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

    The problem occurs when running scrdet. After printing g, the output is:

    Tensor("tower_0/clip_by_norm_88:0", shape=(3, 3, 256, 256), dtype=float32, device=/device:GPU:0)
    Tensor("tower_1/clip_by_norm_88:0", shape=(3, 3, 256, 256), dtype=float32, device=/device:GPU:1)
    Tensor("tower_0/clip_by_norm_89:0", shape=(1, 1, 256, 1024), dtype=float32, device=/device:GPU:0)
    Tensor("tower_1/clip_by_norm_89:0", shape=(1, 1, 256, 1024), dtype=float32, device=/device:GPU:1)
    None

    opened by xxx0320 6
  • Training problem with R3Det-KLD

    Hello! I'd like to ask: when training r3det_kl with the resnet_50 pretrained weights, the reg_loss keeps oscillating. Which step could be going wrong?

    The cfgs settings and the loss curve are as follows.

    SAVE_WEIGHTS_INTE = 27000 * 1, CLS_WEIGHT = 1.0, REG_WEIGHT = 2.0, REG_LOSS_MODE = 3  # KLD loss

    (loss curve screenshot)

    opened by yanglcs 5
  • How to train and test retinanet-gwd on the HRSC2016 dataset?

    1. I have downloaded the trained models provided by this project and put them into $PATH_ROOT/output/pretained_weights; the pretrained weights are resnet_v1d. 2. I have compiled:

    cd $PATH_ROOT/libs/utils/cython_utils
    rm *.so
    rm *.c
    rm *.cpp
    python setup.py build_ext --inplace (or make)
    
    cd $PATH_ROOT/libs/utils/
    rm *.so
    rm *.c
    rm *.cpp
    python setup.py build_ext --inplace
    
    3. I have copied $PATH_ROOT/libs/configs/HRSC2016/gwd/cfgs_res50_hrsc2016_gwd_v6.py to $PATH_ROOT/libs/configs/cfgs.py
    4. The directory structure of the HRSC2016 dataset is shown in the attached image.

    5. When I run python tools/gwd/train.py, I get the following errors:

    2021-09-05 07:49:28.459600: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../../dataloader/tfrecord; No such file or directory
    2021-09-05 07:49:28.534617: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../../dataloader/tfrecord; No such file or directory
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.NotFoundError: ../../dataloader/tfrecord; No such file or directory
      [[{{node get_batch/matching_filenames/MatchingFiles}}]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train.py", line 160, in <module>
        trainer.main()
      File "train.py", line 155, in main
        self.log_printer(gwd, optimizer, global_step, tower_grads, total_loss_dict, num_gpu, graph)
      File "../../tools/train_base.py", line 196, in log_printer
        sess.run(init_op)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.NotFoundError: ../../dataloader/tfrecord; No such file or directory
      [[node get_batch/matching_filenames/MatchingFiles (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]

    Original stack trace for 'get_batch/matching_filenames/MatchingFiles':
      File "train.py", line 160, in <module>
        trainer.main()
      File "train.py", line 53, in main
        is_training=True)
      File "../../dataloader/dataset/read_tfrecord.py", line 115, in next_batch
        filename_tensorlist = tf.train.match_filenames_once(pattern)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/input.py", line 76, in match_filenames_once
        name=name, initial_value=io_ops.matching_files(pattern),
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_io_ops.py", line 464, in matching_files
        "MatchingFiles", pattern=pattern, name=name)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 513, in new_func
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()

    Can you help me solve this problem? I hope for your reply.

    opened by myGithubSiki 5
  • Why doesn't NMS seem to work?

    Hello, I'd like to ask why NMS does not seem to take effect. The system is Ubuntu 18.04 and the NMS part has been compiled. The algorithm is GWD; training and testing both run normally, but some results look as if NMS was applied while others do not, even though NMS is enabled for both training and testing.

    post-processing

    NMS = True NMS_IOU_THRESHOLD = 0.45 MAXIMUM_DETECTIONS = 200 FILTERED_SCORE = 0.5 VIS_SCORE = 0.4

    test and eval

    TEST_SAVE_PATH = os.path.join(ROOT_PATH, 'tools/test_result') EVALUATE_R_DIR = os.path.join(ROOT_PATH, 'output/evaluate_result_pickle/') USE_07_METRIC = True EVAL_THRESHOLD = 0.45

    Training: (screenshot)

    Testing: DJI_0005_000090 (screenshot)

    opened by Chrispaoge 4
  • not found the implementation of indirect angle regression

    Great work, thanks for sharing the project. I did not find the implementation of the indirect angle regression explained in your TPAMI paper. Have I missed it, or is this part not included in this public project? Thanks.

    opened by menchael 0
  • Baidu signup impossible for non-Chinese users; access to models possible through other apps?

    Hi, Baidu doesn't accept international phone numbers, so it won't allow signing up. This will severely limit the usage of your models. Have you copied your model weights to another platform such as Google Drive or Microsoft OneDrive?

    Please let me know. Thank you

    opened by shnamin 0
  • Transfer Learning

    Hello, I have a question regarding training a custom dataset.

    How can I transfer the learning of some specific classes from the pretrained weights (e.g. DOTA) to my custom training if my custom classes are different from the pretrained classes?

    Best regards and Thank you

    opened by wafa-bouzouita 1