RMNA: A Neighbor Aggregation-Based Knowledge Graph Representation Learning Model Using Rule Mining

Overview

Our code is based on Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs.

This README is also adapted from that project.

This repository contains a PyTorch implementation of RMNA. We use AMIE to mine Horn rules. RMNA is a hierarchical neighbor aggregation model that transforms valuable multi-hop neighbors into one-hop neighbors that are semantically similar to the corresponding multi-hop neighbors, so that the completeness of multi-hop neighbor information is preserved.
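
To make the transformation concrete, here is a minimal sketch (hypothetical helper names, not code from this repository) of how a mined Horn rule such as r1(x, y) ∧ r2(y, z) ⇒ r(x, z) can collapse a 2-hop path into a semantically similar one-hop triple:

    # Hypothetical illustration of rule-based neighbor aggregation:
    # a rule maps a 2-hop relation path (r1, r2) to a single relation r,
    # so the 2-hop neighbor z of x becomes a one-hop neighbor via r.
    def collapse_two_hop(triples, rules):
        """triples: set of (head, relation, tail); rules: {(r1, r2): r}."""
        by_head = {}
        for h, r, t in triples:
            by_head.setdefault(h, []).append((r, t))

        new_triples = set()
        for x, r1, y in triples:
            for r2, z in by_head.get(y, []):
                r = rules.get((r1, r2))
                if r is not None and z != x:
                    new_triples.add((x, r, z))  # one-hop surrogate for the 2-hop path
        return new_triples

    # Example rule: born_in(x, y) ∧ city_of(y, z) ⇒ nationality(x, z)
    rules = {("born_in", "city_of"): "nationality"}
    triples = {("alice", "born_in", "paris"), ("paris", "city_of", "france")}
    print(collapse_two_hop(triples, rules))  # {('alice', 'nationality', 'france')}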

Requirements

Please download Miniconda and create an environment using the following command:

    conda env create -f pytorch35.yml

Activate the environment before executing the program as follows:

    source activate pytorch35

Dataset

We used two datasets to evaluate our model. The datasets and their folder names are given below, and a minimal loading sketch follows the list.

  • Freebase: FB15k-237
  • Wordnet: WN18RR
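
Both datasets are distributed as tab-separated (head, relation, tail) files. A minimal loading sketch, assuming the standard train.txt/valid.txt/test.txt layout of FB15k-237 and WN18RR:

    import os

    def load_triples(folder, split="train"):
        """Read tab-separated (head, relation, tail) lines, e.g. data/FB15k-237/train.txt."""
        triples = []
        with open(os.path.join(folder, split + ".txt"), encoding="utf-8") as f:
            for line in f:
                head, relation, tail = line.strip().split("\t")
                triples.append((head, relation, tail))
        return triples

    train = load_triples("./data/FB15k-237", "train")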

Rule Mining and Filtering

In the AMIE+ folder, you can mine rules with the following command:

    java -jar amie_plus.jar [TSV file]

Without additional arguments, AMIE+ thresholds rules at a PCA confidence of 0.1 and a head coverage of 0.01; these defaults can be changed (see the AMIE documentation). The rule files generated and processed by AMIE are placed in the new_triple folder of the corresponding dataset.
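
If you want to re-filter the mined rules yourself, the sketch below may help; it assumes the usual AMIE+ output, where each rule line is tab-separated with head coverage and PCA confidence among its metric columns (column positions vary across AMIE versions, so check the header of your output):

    def filter_rules(amie_output, min_pca_conf=0.5, min_head_cov=0.01,
                     pca_col=3, hc_col=1):
        """Keep rules whose PCA confidence and head coverage clear the thresholds.

        The column indices are assumptions; verify them against the header
        line printed by your AMIE+ version.
        """
        kept = []
        with open(amie_output, encoding="utf-8") as f:
            for line in f:
                fields = line.rstrip("\n").split("\t")
                if len(fields) <= max(pca_col, hc_col) or "=>" not in fields[0]:
                    continue  # skip headers, comments, and log lines
                try:
                    hc, pca = float(fields[hc_col]), float(fields[pca_col])
                except ValueError:
                    continue
                if pca >= min_pca_conf and hc >= min_head_cov:
                    kept.append((fields[0], hc, pca))
        return kept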

Training

Parameters:

--data: Specify the folder name of the dataset.

--epochs_gat: Number of epochs for GAT training.

--epochs_conv: Number of epochs for convolution training.

--lr: Initial learning rate.

--weight_decay_gat: L2 regularization for the GAT model.

--weight_decay_conv: L2 regularization for the convolution model.

--get_2hop: Get a pickle object of 2-hop neighbors (see the sketch after this parameter list).

--use_2hop: Use 2-hop neighbors for training.

--partial_2hop: Use only one 2-hop neighbor per node for training.

--output_folder: Path of output folder for saving models.

--batch_size_gat: Batch size for the GAT model.

--valid_invalid_ratio_gat: Ratio of valid to invalid triples for GAT training.

--drop_gat: Dropout probability for the attention layer.

--alpha: LeakyReLU alpha for the attention layer.

--nhead_GAT: Number of heads for multi-head attention.

--margin: Margin used in hinge loss.

--batch_size_conv: Batch size for convolution model.

--alpha_conv: LeakyReLU alpha for the convolution layer.

--valid_invalid_ratio_conv: Ratio of valid to invalid triples for convolution training.

--out_channels: Number of output channels in the convolution layer.

--drop_conv: Dropout probability for the convolution layer.
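
As a rough illustration of what --get_2hop precomputes (the actual logic in this repository may differ), building and pickling a 2-hop neighborhood could look like this:

    import pickle

    def build_two_hop(triples):
        """Map each node x to its 2-hop neighbors as (r1, r2, intermediate, target)."""
        out_edges = {}
        for h, r, t in triples:
            out_edges.setdefault(h, []).append((r, t))

        two_hop = {}
        for x, edges in out_edges.items():
            for r1, y in edges:
                for r2, z in out_edges.get(y, []):
                    if z != x:
                        two_hop.setdefault(x, []).append((r1, r2, y, z))
        return two_hop

    triples = load_triples("./data/FB15k-237", "train")  # loader sketched above
    with open("./data/FB15k-237/2hop.pickle", "wb") as f:
        pickle.dump(build_two_hop(triples), f)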

How to run

When running for the first time, run the preparation script:

    $ sh prepare.sh

  • Freebase

      $ python3 main.py --data ./data/FB15k-237/ --epochs_gat 2000 --epochs_conv 150 --get_2hop True --partial_2hop True --batch_size_gat 272115 --margin 1 --out_channels 50 --drop_conv 0.3 --output_folder ./checkpoints/fb/out/
    
  • Wordnet

      $ python3 main.py --data ./data/WN18RR/ --epochs_gat 3600 --epochs_conv 150 --get_2hop True --partial_2hop True
    