
Overview

OpenHGNN

OpenHGNN is an open-source toolkit for Heterogeneous Graph Neural Networks, based on DGL (Deep Graph Library) and PyTorch. It integrates state-of-the-art models for heterogeneous graphs.

Key Features

  • Easy-to-Use: OpenHGNN provides easy-to-use interfaces for running experiments with the given models and datasets. We also integrate optuna for hyperparameter optimization (a generic optuna sketch follows this list).
  • Extensibility: Users can define customized tasks/models/datasets to apply new models to new scenarios.
  • Efficiency: The DGL backend provides efficient APIs.
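
To give a sense of what the optuna-based hyperparameter optimization does, here is a generic optuna search loop. It is purely illustrative and not OpenHGNN's internal code; the search space (lr, hidden_dim) and the dummy objective are assumptions.

# Illustrative only: a generic optuna search loop of the kind the HPO feature relies on.
# The objective below is a stand-in; in practice it would train a model with the
# sampled hyper-parameters and return a validation metric.
import optuna

def objective(trial):
    lr = trial.suggest_float('lr', 1e-4, 1e-1, log=True)                 # hypothetical range
    hidden_dim = trial.suggest_categorical('hidden_dim', [32, 64, 128])  # hypothetical choices
    return lr * hidden_dim  # placeholder for e.g. validation accuracy

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)
print(study.best_params)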

Get Started

Requirements and Installation

  • Python >= 3.6

  • PyTorch >= 1.7.1

  • DGL >= 0.7.0

  • CPU or NVIDIA GPU, Linux, Python3

1. Python environment (optional): We recommend using the Conda package manager.

conda create -n openhgnn python=3.7
source activate openhgnn

2. PyTorch: Install PyTorch. For example:

# CUDA versions: cpu, cu92, cu101, cu102, cu110, cu111
pip install torch==1.8.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

3. DGL: Install DGL following their instructions. For example:

# CUDA versions: cpu, cu101, cu102, cu110, cu111
pip install --pre dgl-cu101 -f https://data.dgl.ai/wheels-test/repo.html

4. OpenHGNN and other dependencies:

git clone https://github.com/BUPT-GAMMA/OpenHGNN
cd OpenHGNN
pip install -r requirements.txt
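
Optionally, as a quick sanity check (not part of the official instructions), you can confirm that PyTorch and DGL import correctly and that a CUDA device is visible:

# Optional sanity check after installation: print the installed versions
# and whether PyTorch can see a CUDA device.
import torch
import dgl

print('torch', torch.__version__, 'CUDA available:', torch.cuda.is_available())
print('dgl', dgl.__version__)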

Running an existing baseline model on an existing benchmark dataset

python main.py -m model_name -d dataset_name -t task_name -g 0 --use_best_config

usage: main.py [-h] [--model MODEL] [--task TASK] [--dataset DATASET] [--gpu GPU] [--use_best_config]

optional arguments:

  -h, --help            show this help message and exit

  --model MODEL, -m MODEL
                        name of the model

  --task TASK, -t TASK  name of the task

  --dataset DATASET, -d DATASET
                        name of the dataset

  --gpu GPU, -g GPU     controls which GPU to use; if you do not have a GPU, set -g -1

  --use_best_config     use the best config for the given model and dataset; if you want to set different hyper-parameters, modify openhgnn.config.ini manually (best_config overrides the parameters in config.ini)

  --use_hpo             besides --use_best_config, we provide a hyper-parameter search example that finds the best hyper-parameters automatically

e.g.:

python main.py -m GTN -d imdb4GTN -t node_classification -g 0 --use_best_config
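
The same run can also be launched from Python through the Experiment class, the interface that appears in several issue reports below; a minimal sketch, assuming an OpenHGNN version that exposes this API (keyword support may differ across versions):

# Programmatic equivalent of the CLI call above, based on the Experiment
# usage shown in the issues below; adjust the arguments to your installed version.
from openhgnn.experiment import Experiment

experiment = Experiment(model='GTN', dataset='imdb4GTN',
                        task='node_classification', gpu=0,
                        use_best_config=True)
experiment.run()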

OpenHGNN is under development and is released as a nightly build. For now, we provide a set of models, such as HetGNN, NSHE, GTN, MAGNN, and RSHN.

Note: If you are interested in a particular model, refer to the models list below.

Refer to the docs for more basic and in-depth usage.

Models

Supported models with specific tasks

Each link gives some basic usage.

Model                 Node classification   Link prediction   Recommendation
RGCN [ESWC 2018]      ✔️                     ✔️
HAN [WWW 2019]        ✔️
KGCN [WWW 2019]                                               ✔️
HetGNN [KDD 2019]     ✔️                     ✔️
GTN [NeurIPS 2019]    ✔️
RSHN [ICDM 2019]      ✔️
DMGI [AAAI 2020]      ✔️
MAGNN [WWW 2020]      ✔️
CompGCN [ICLR 2020]   ✔️                     ✔️
NSHE [IJCAI 2020]     ✔️
NARS [arXiv]          ✔️
MHNF [arXiv]          ✔️
HGSL [AAAI 2021]      ✔️
HGNN-AC [WWW 2021]    ✔️
HPN [TKDE 2021]       ✔️
RHGNN [arXiv]         ✔️

To be supported models

  • Metapath2vec[KDD 2017]

Candidate models

Contributors

GAMMA LAB [BUPT]: Tianyu Zhao, Yaoqi Liu, Fengqi Liang, Yibo Li, Yanhu Mo, Donglin Xia, Xinlong Zhai, Siyuan Zhang, Qi Zhang, Chuan Shi, Cheng Yang, Xiao Wang

BUPT: Jiahang Li, Anke Hu

DGL Team: Quan Gan, Jian Zhang

Comments
  • Attribute error

    Attribute error

    I am training the HetGNN model for node classification. When I try to run the training script, I get the following error. Please help me.

    AttributeError: 'dict' object has no attribute 'srcdata'

    opened by faizan1234567 13
  • error in HetGNN_sampler.py

    error in HetGNN_sampler.py

    line 168, in assign_features_to_blocks
        assign_simple_node_features(blocks[0].srcdata, g, ntypes)
    AttributeError: 'dict' object has no attribute 'srcdata'

    opened by Kingrd97 10
  • Identical embeddings produced by HetGNN

    Identical embeddings produced by HetGNN

    I ran HetGNN on the provided academic4HetGNN.zip dataset. Some of the resulting embeddings are exactly identical, and the cause is unknown. Is this expected?

    Test as follows:

    import numpy as np

    emb = np.load('emb50.npy')
    list = emb[:, 0]

    for i in np.unique(list):
        idx = np.argwhere(list == i)
        r = idx.reshape(1, -1).squeeze(0)
        if len(r) > 1:
            print('index for {}:\n'.format(i), r)
            for j in r:
                print(emb[j])

    opened by lixusign 9
  • Error to run without Cuda

    Error to run without Cuda

    File "C:\Users\XyZ\OpenHGNN\openhgnn\models\GTN_sparse.py", line 220, in forward sum_g = dgl.adj_sum_graph(A, 'w_sum') AttributeError: module 'dgl' has no attribute 'adj_sum_graph'

    This issue came up while I ran the command: python main.py -m GTN -d imdb4GTN -t node_classification -g -1 --use_best_config

    Can someone tell me where I went wrong?

    opened by M-Somtirth 4
  • Unable to train with GPU

    Unable to train with GPU

    python main.py -m KGCN -d LastFM4KGCN -t recommendation -g 0 --use_best_config

    RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)

    opened by Tingting-Liu-star 4
  • Where can I find the dataset build program?

    Where can I find the dataset build program?

    For example: https://github.com/BUPT-GAMMA/OpenHGNN/tree/main/openhgnn/dataset#academic4HetGNN for this dataset.

    When I call extract_archive, I get a .bin file containing the graph g.

    But where can I find out how to build this dataset with a standalone DGL program?

    opened by lixusign 4
  • Error when running GTN&fastGTN

    Error when running GTN&fastGTN

    Thank you very much for providing this tool. I get an error when I run fastGTN using:

    python main.py -m fastGTN -t node_classification -d acm4GTN -g 0 --use_best_config

    The error is as follows:

    Traceback (most recent call last):
      File "D:/github/OpenHGNN/main.py", line 30, in <module>
        OpenHGNN(args=config)
      File "D:\github\OpenHGNN\openhgnn\start.py", line 19, in OpenHGNN
        result = flow.train()
      File "D:\github\OpenHGNN\openhgnn\trainerflow\node_classification.py", line 112, in train
        train_loss = self._full_train_step()
      File "D:\github\OpenHGNN\openhgnn\trainerflow\node_classification.py", line 152, in _full_train_step
        logits = self.model(self.hg, h_dict)[self.category]
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "D:\github\OpenHGNN\openhgnn\models\fastGTN.py", line 119, in forward
        hat_A = self.layers[i](...)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "D:\github\OpenHGNN\openhgnn\models\fastGTN.py", line 180, in forward
        sum_g = dgl.adj_sum_graph(A, 'w_sum')
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\transforms\functional.py", line 2766, in adj_sum_graph
        C_gidx, C_weights = F.csrsum(gidxs, weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\backend\pytorch\sparse.py", line 817, in csrsum
        nrows, ncols, C_indptr, C_indices, C_eids, C_weights = CSRSum.apply(gidxs, *weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\backend\pytorch\sparse.py", line 668, in forward
        gidxC, C_weights = _csrsum(gidxs, weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\sparse.py", line 776, in _csrsum
        C, C_weights = _CAPI_DGLCSRSum(As, [F.to_dgl_nd(w) for w in A_weights])
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\_ffi\_ctypes\function.py", line 188, in __call__
        check_call(_LIB.DGLFuncCall(
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\_ffi\base.py", line 65, in check_call
        raise DGLError(py_str(_LIB.DGLGetLastError()))
    dgl._ffi.base.DGLError: [15:31:21] C:\Users\Administrator\dgl-0.5\src\array\kernel.cc:471: Check failed: A[i].indptr->dtype == idtype (int64 vs. int32) : The ID types of all graphs must be equal.

    I use the following software versions:

    python = 3.8
    cudatoolkit = 11.3.1
    torch = 1.11.0+cu113
    dgl-cu113 = 0.8.1 & 0.8.0

    Then I ran the same software versions on my Ubuntu server with no errors.

    opened by huihuijiangqiang 3
  • Bugs in minibatch training

    Bugs in minibatch training

    🐛 Bug

    To Reproduce

    The error occurred in the _mini_train_step function in trainerflow/node_classification.py when using mini_batch_flag in the node_classification task with the SimpleHGN model.

    import argparse
    from openhgnn.experiment import Experiment
    
    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--model', '-m', default='SimpleHGN', type=str, help='name of models')
        parser.add_argument('--task', '-t', default='node_classification', type=str, help='name of task')
        # link_prediction / node_classification
        parser.add_argument('--dataset', '-d', default='imdb4MAGNN', type=str, help='name of datasets')
        parser.add_argument('--gpu', '-g', default='0', type=int, help='-1 means cpu')
        parser.add_argument('--use_best_config', action='store_true', help='will load utils.best_config')
        parser.add_argument('--load_from_pretrained', action='store_true', help='load model from the checkpoint')
        args = parser.parse_args()
    
        experiment = Experiment(model=args.model, dataset=args.dataset, task=args.task, gpu=args.gpu,
                                use_best_config=args.use_best_config, load_from_pretrained=args.load_from_pretrained, mini_batch_flag = True, batch_size=64)
        experiment.run()
    
    

    Expected behavior

    Minibatch training on a large heterograph

    Environment

    • torch==1.12.1
    • dgl-cu113==0.9.0 # for CUDA support
    • openhgnn==0.3.0
    • Linux
    • Python 3.8.13

    Additional context

    • the default minibatch sampler is MultiLayerFullNeighborSampler
    • blocks is a list (line 164), while the expected input to the forward function of the model (e.g. SimpleHGN) is a hg (line 159)
    for i, (input_nodes, seeds, blocks) in enumerate(loader_tqdm):
        blocks = [blk.to(self.device) for blk in blocks]
        ...
        logits = self.model(blocks, emb)[self.category]
    
    def forward(self, hg, h_dict):
        with hg.local_scope():
            hg.ndata['h'] = h_dict
    
    opened by suxnju 2
  • Confusion about the ordering of HetGNN embeddings

    Confusion about the ordering of HetGNN embeddings

    A question: in x = self.model(blocks[0], input_features), the returned x is a dict. How does the embedding of each node_type in it correspond to the order of the input nodes of blocks[0]?

    After checking, I found that it does not follow the node order given by blocks[0].srcnodes[node_type].data[dgl.NID].

    opened by lixusign 2
  • 'HIN_LinkPrediction' object has no attribute 'get_idx'

    'HIN_LinkPrediction' object has no attribute 'get_idx'

    File "\OpenHGNN-main\openhgnn\tasks\link_prediction.py", line 32, in __init__
        self.train_hg, self.val_hg, self.test_hg = self.dataset.get_idx()
    AttributeError: 'HIN_LinkPrediction' object has no attribute 'get_idx'

    opened by xuptacm 2
  • Cannot run GTN with the acm4GTN dataset

    Cannot run GTN with the acm4GTN dataset

    Running python main.py -m GTN -t node_classification -d acm4GTN -g 0 --use_best_config

    Error message:

    Using backend: pytorch
    Use the best config.
    Done saving data into cached files.
    Modify the out_dim with num_classes
    0%| | 0/50 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "main.py", line 24, in <module>
        OpenHGNN(args=config)
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/start.py", line 17, in OpenHGNN
        result = flow.train()
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/trainerflow/node_classification.py", line 77, in train
        loss = self._full_train_step()
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/trainerflow/node_classification.py", line 109, in _full_train_step
        loss.backward()
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/autograd/function.py", line 87, in apply
        return self._forward_cls.backward(self, *args)  # type: ignore[attr-defined]
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 544, in backward
        gidxA.reverse(), A_weights, gidxC, dC_weights, gidxB.number_of_ntypes())
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 638, in csrmm
        CSRMM.apply(gidxA, A_weights, gidxB, B_weights, num_vtypes)
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 528, in forward
        gidxC, C_weights = _csrmm(gidxA, A_weights, gidxB, B_weights, num_vtypes)
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/sparse.py", line 548, in _csrmm
        A, F.to_dgl_nd(A_weights), B, F.to_dgl_nd(B_weights), num_vtypes)
      File "dgl/_ffi/_cython/./function.pxi", line 287, in dgl._ffi._cy3.core.FunctionBase.__call__
      File "dgl/_ffi/_cython/./function.pxi", line 232, in dgl._ffi._cy3.core.FuncCall
      File "dgl/_ffi/_cython/./base.pxi", line 155, in dgl._ffi._cy3.core.CALL
    dgl._ffi.base.DGLError: [17:18:53] /opt/dgl/src/array/cuda/csr_mm.cu:87: Check failed: e == CUSPARSE_STATUS_SUCCESS: CUSPARSE ERROR: 11
    Stack trace:
      [bt] (0) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7fd13c2565df]
      [bt] (1) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(std::pair<dgl::aten::CSRMatrix, dgl::runtime::NDArray> dgl::aten::cusparse::CusparseSpgemm<float, int>(dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray)+0x625) [0x7fd13c6accd5]
      [bt] (2) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(std::pair<dgl::aten::CSRMatrix, dgl::runtime::NDArray> dgl::aten::CSRMM<2, long, float>(dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray)+0x59e) [0x7fd13c6af81e]
      [bt] (3) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(dgl::aten::CSRMM(dgl::aten::CSRMatrix, dgl::runtime::NDArray, dgl::aten::CSRMatrix, dgl::runtime::NDArray)+0x10d6) [0x7fd13c493466]
      [bt] (4) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(+0x48cfa8) [0x7fd13c493fa8]
      [bt] (5) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(+0x48d724) [0x7fd13c494724]
      [bt] (6) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(DGLFuncCall+0x48) [0x7fd13c4d5c78]
      [bt] (7) /opt/miniconda3/lib/python3.7/site-packages/dgl/_ffi/_cy3/core.cpython-37m-x86_64-linux-gnu.so(+0x163ea) [0x7fd1136f03ea]
      [bt] (8) /opt/miniconda3/lib/python3.7/site-packages/dgl/_ffi/_cy3/core.cpython-37m-x86_64-linux-gnu.so(+0x1695b) [0x7fd1136f095b]

    GPU: A100-PCIE. DGL version: dgl-cu111-0.8a211008. It looks like the logits can be obtained, but backpropagation fails.

    Strangely, there is no problem at all when running the IMDB4GTN dataset, and running ACM4GTN with MHNF reports the same error.

    I see there are two GTN implementations, GTN_sparse.py and GTN.py, and GTN_sparse is used by default. GTN.py can run ACM4GTN, but the accuracy is only around 60%.

    opened by a772316182 2
  • Help needed: Wanted behavior of Experiment.specific_trainerflow.get method and task/trainerflow registration

    Help needed: Wanted behavior of Experiment.specific_trainerflow.get method and task/trainerflow registration

    Hi, I am trying to create a new trainer flow as well as a new task. I am struggling a bit and have a few questions: when I register them with @register_flow(str_flow) and @register_task(str_task), must str_task and str_flow be identical?
    Because my flow is not specific to a model, it is not in the specific_trainerflow dictionary defined in the Experiment class. So line 92 in experiment.py (trainerflow = self.specific_trainerflow.get(self.config.model, self.config.task)) returns the key of the task as the trainerflow_key. Is this the wanted behavior?

    Thanks!

    opened by Carayolj 0
  • run HGSL model error

    run HGSL model error

    🐛 Bug

    When I run the suggested command:

    python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config
    

    this raise an error like:

    Traceback (most recent call last):
      File "main.py", line 21, in <module>
        experiment.run()
      File "/workspace/OpenHGNN/openhgnn/experiment.py", line 97, in run
        flow = build_flow(self.config, trainerflow)
      File "/workspace/OpenHGNN/openhgnn/trainerflow/__init__.py", line 46, in build_flow
        return FLOW_REGISTRY[flow_name](args)
      File "/workspace/OpenHGNN/openhgnn/trainerflow/node_classification.py", line 42, in __init__
        self.model = build_model(self.model).build_model_from_args(self.args, self.hg).to(self.device)
      File "/workspace/OpenHGNN/openhgnn/models/HGSL.py", line 106, in build_model_from_args
        mp_emb_dim = hg.nodes["paper"].data["pap_m2v_emb"].shape[1]
      File "/opt/conda/lib/python3.7/site-packages/dgl/view.py", line 73, in __getitem__
        return self._graph._get_n_repr(self._ntid, self._nodes)[key]
      File "/opt/conda/lib/python3.7/site-packages/dgl/frame.py", line 622, in __getitem__
        return self._columns[name].data
    KeyError: 'pap_m2v_emb'

    It seems there is no pap_m2v_emb key in the paper nodes' data, so how can I fix it?


    More errors: when I just set mp_emb_dim=0 to skip this line, more errors are raised, such as hidden_dim, mini_batch_flag ... not being defined in the config. Besides, when I successfully ran this model, another exception was raised:

    [screenshot of the exception]

    Do you have an updated version of the model?

    Sincere thanks.

    To Reproduce

    Steps to reproduce the behavior:

    1. cd OpenHGNN
    2. python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config

    Expected behavior

    Environment

    • OpenHGNN Version (e.g., 1.0):
    • PyTorch latest, DGL latest
    • Linux
    • python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config
    • best_config for recommend
    opened by vchopin 1
  • How to train model using own dataset?

    How to train model using own dataset?

    ❓ Questions and Help

    I want to train with my own HNN data; could you tell me how to edit this code? The data in ./openhgnn/dataset are downloaded from https://s3.cn-north-1.amazonaws.com.cn/dgl-data/ and are .bin files. So how could I change this dataset? Please help!

    opened by Fino2020 1
  • [DHNE]

    [DHNE]

    Description

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [x] The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
    • [x] Changes are complete (i.e. I finished coding on this PR)
    • [x] All changes have test coverage
    • [x] Code is well-documented
    • [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [x] Related issue is referred in this PR
    • [x] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    opened by Vera-200 0
  • [Model]Mg2vec

    [Model]Mg2vec

    Description

    Add the Mg2vec Model and add the EdgeClassification Task

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [ ] The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
    • [ ] Changes are complete (i.e. I finished coding on this PR)
    • [ ] All changes have test coverage
    • [ ] Code is well-documented
    • [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [ ] Related issue is referred in this PR
    • [ ] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    • [ ] Add configs for Mg2vec in config.ini and config.py
    • [ ] Add Mg2vec.py, which contains the model part
    • [ ] Add mg2vec_sampler.py for reading data
    • [ ] Add mg2vec_trainer.py for training
    • [ ] Add EdgeClassificationDataset.py for EdgeClassification Task, which is a modified version of NodeClassificationDataset.py
    • [ ] Add mg2vec_dataset.py for download/read mg2vec dataset
    • [ ] Add edge_classification.py, which is a modified version of node_classification.py
    • [ ] Add ec_with_SVC function in evaluator.py for edge_classification task
    • [ ] Add readme.md for Mg2vec model
    • [ ] Modify the corresponding __init__.py and experiment.py
    opened by null-xyj 0