RMNA: A Neighbor Aggregation-Based Knowledge Graph Representation Learning Model Using Rule Mining

Overview

Our code is based on Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs (KBGAT), and this README is adapted from that project's README.

This repository contains a PyTorch implementation of RMNA. We use AMIE to obtain Horn rules. RMNA is a hierarchical neighbor aggregation model that transforms valuable multi-hop neighbors into one-hop neighbors which are semantically similar to the corresponding multi-hop neighbors, so that the completeness of multi-hop neighbor information is preserved.
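
As an illustration of the idea (a minimal sketch, not the authors' code; the toy triples and the rule are hypothetical): given a mined Horn rule r1(x, y) AND r2(y, z) => r3(x, z), a 2-hop neighbor z reached via r1 and r2 can be attached to x directly as the one-hop neighbor (x, r3, z):

    # Minimal sketch: turning 2-hop neighbors into 1-hop neighbors via a rule.
    # The triples and the rule below are hypothetical toy data.
    from collections import defaultdict

    triples = [("A", "r1", "B"), ("B", "r2", "C")]
    rules = {("r1", "r2"): "r3"}  # rule body (r1, r2) implies head relation r3

    # Index outgoing edges so 2-hop paths can be enumerated.
    out_edges = defaultdict(list)
    for h, r, t in triples:
        out_edges[h].append((r, t))

    # Every 2-hop path whose relation pair matches a rule body yields a
    # new 1-hop triple that is semantically similar to the 2-hop path.
    new_triples = {
        (h, rules[(r1, r2)], t2)
        for h, r1, t1 in triples
        for r2, t2 in out_edges[t1]
        if (r1, r2) in rules
    }
    print(new_triples)  # {('A', 'r3', 'C')}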

Requirements

Please install Miniconda and create an environment using the following command:

    conda env create -f pytorch35.yml

Activate the environment before executing the program as follows:

    source activate pytorch35

Dataset

We use two datasets to evaluate our model. The datasets and their folder names are listed below, followed by a minimal loader sketch.

  • Freebase: FB15k-237
  • Wordnet: WN18RR
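
Each dataset folder, following the upstream KBGAT-style layout, is expected to contain tab-separated triple files such as train.txt, valid.txt, and test.txt. The file names here are an assumption based on that layout; the sketch below only illustrates the expected (head, relation, tail) format:

    # Hedged sketch: load tab-separated (head, relation, tail) triples.
    # The file layout is assumed from the upstream KBGAT-style data folders.
    def load_triples(path):
        triples = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip("\n").split("\t")
                if len(parts) == 3:
                    triples.append(tuple(parts))
        return triples

    train = load_triples("./data/FB15k-237/train.txt")
    print(f"{len(train)} training triples")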

Rule Mining and Filtering

In the AMIE+ folder, we can mine rules using the following command:

    java -jar amie_plus.jar [TSV file]

Without additional arguments, AMIE+ keeps rules with a PCA confidence of at least 0.1 and a head coverage of at least 0.01. You can change these defaults; see the AMIE documentation. The rule files generated and processed by AMIE are placed in the new_triple folder of the corresponding dataset.
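
For reference, a minimal sketch of what such a filtering step could look like (the column order is an assumption based on AMIE+'s usual TSV output, where the rule string is followed by head coverage, standard confidence, and PCA confidence):

    # Hedged sketch: keep AMIE+ rules above the confidence/coverage thresholds.
    # Assumed column order: rule, head coverage, std confidence, PCA confidence.
    def filter_rules(path, min_pca=0.1, min_hc=0.01):
        kept = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip("\n").split("\t")
                if len(parts) < 4:
                    continue
                try:
                    head_coverage = float(parts[1])
                    pca_confidence = float(parts[3])
                except ValueError:
                    continue  # skip headers and log lines
                if pca_confidence >= min_pca and head_coverage >= min_hc:
                    kept.append((parts[0], head_coverage, pca_confidence))
        return kept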

Training

Parameters:

--data: Specify the folder name of the dataset.

--epochs_gat: Number of epochs for GAT training.

--epochs_conv: Number of epochs for convolution training.

--lr: Initial learning rate.

--weight_decay_gat: L2 regularization for the GAT model.

--weight_decay_conv: L2 regularization for the convolution model.

--get_2hop: Get a pickle object of 2-hop neighbors.

--use_2hop: Use 2-hop neighbors for training.

--partial_2hop: Use only one 2-hop neighbor per node for training.

--output_folder: Path of the output folder for saving models.

--batch_size_gat: Batch size for the GAT model.

--valid_invalid_ratio_gat: Ratio of valid to invalid triples for GAT training.

--drop_gat: Dropout probability for the attention layer.

--alpha: LeakyReLU alpha for the attention layer.

--nhead_GAT: Number of heads for multi-head attention.

--margin: Margin used in the hinge loss.

--batch_size_conv: Batch size for the convolution model.

--alpha_conv: LeakyReLU alpha for the convolution layer.

--valid_invalid_ratio_conv: Ratio of valid to invalid triples for convolution training.

--out_channels: Number of output channels in the convolution layer.

--drop_conv: Dropout probability for the convolution layer.
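
For orientation, a minimal sketch of how a subset of these flags might be declared with argparse (the defaults shown are illustrative, not the repository's actual values):

    import argparse

    def str2bool(v):
        # The run commands below pass literal "True"; plain bool() would
        # treat any non-empty string as True, so parse the string explicitly.
        return str(v).lower() in ("true", "1", "yes")

    # Illustrative defaults only; the repository's real defaults may differ.
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", default="./data/WN18RR/", help="dataset folder")
    parser.add_argument("--epochs_gat", type=int, default=3600)
    parser.add_argument("--epochs_conv", type=int, default=150)
    parser.add_argument("--lr", type=float, default=1e-3)
    parser.add_argument("--get_2hop", type=str2bool, default=False)
    parser.add_argument("--partial_2hop", type=str2bool, default=False)
    parser.add_argument("--margin", type=float, default=1.0)
    parser.add_argument("--out_channels", type=int, default=50)
    parser.add_argument("--drop_conv", type=float, default=0.3)
    parser.add_argument("--output_folder", default="./checkpoints/out/")
    args = parser.parse_args()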

How to run

When running for the first time, run the preparation script:

    $ sh prepare.sh
  • Freebase

      $ python3 main.py --data ./data/FB15k-237/ --epochs_gat 2000 --epochs_conv 150 --get_2hop True --partial_2hop True --batch_size_gat 272115 --margin 1 --out_channels 50 --drop_conv 0.3 --output_folder ./checkpoints/fb/out/
    
  • Wordnet

      $ python3 main.py --data ./data/WN18RR/ --epochs_gat 3600 --epochs_conv 150 --get_2hop True --partial_2hop True
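
Note (an observation, not from the original README): batch_size_gat 272115 in the Freebase command equals the number of training triples in FB15k-237, so each GAT epoch processes the entire training set as a single batch; the WN18RR command presumably leaves the batch size and remaining flags at the script's defaults.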
    