MetaTTE: a Meta-Learning Based Travel Time Estimation Model for Multi-city Scenarios

Overview

This is the official TensorFlow implementation of MetaTTE as described in the manuscript.

Core Requirements

  • tensorflow~=2.3.0
  • numpy~=1.18.4
  • spektral~=0.6.1
  • pandas~=1.0.3
  • tqdm~=4.46.0
  • opencv-python~=4.3.0.36
  • matplotlib~=3.2.1
  • Pillow~=7.1.2
  • scipy~=1.4.1

All Dependencies can be installed using the following command:

pip install -r requirements.txt
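
To verify the environment, the following quick check (not part of the repository) confirms that the pinned TensorFlow version is importable and reports whether a GPU is visible:

import tensorflow as tf

print("TensorFlow version:", tf.__version__)                   # expected ~2.3.x
print("GPUs visible:", tf.config.list_physical_devices("GPU"))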

Data Preparation

We provide the datasets used in this paper via Google Drive. After downloading the zip file, please extract all files from its data directory into the data folder of this project.

Download Link: Download
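
After extraction, a quick sanity check such as the one below (illustrative only; the exact array layout inside the .npy files is not documented here) confirms that the files are readable:

import numpy as np

for path in ["./data/chengdu/train.npy", "./data/porto/train.npy"]:
    arr = np.load(path, allow_pickle=True)  # allow_pickle in case of object arrays
    print(path, "->", type(arr).__name__, getattr(arr, "shape", None))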

Configuration

Below is a sample config file, with comments added for explanation. (Please do NOT include the comments in actual config files.)

[General]
mode = train
# Specify the absolute paths of the training, validation and testing files
train_files = ./data/chengdu/train.npy,./data/porto/train.npy
val_files = ./data/chengdu/val.npy,./data/porto/val.npy
test_files = ./data/chengdu/test.npy,./data/porto/test.npy
# Specify the batch size
batch_size = 32
# Specify the GPU device number
gpu = 7
# Specify the unique label for each experiment
prefix = tte_exp_64_gru

[Model]
# Specify the inner learning rate
learning_rate = 1e-2
# Specify the decay factor applied to the inner learning rate
lr_reduce = 0.5
# Specify the maximum number of training iterations
epoch = 500000
# Specify k for the k-shot setting
inner_k = 10
# Specify the outer step size
outer_step_size = 0.1
# Specify the model according to the class name
model = MSMTTEGRUAttModel
# Specify the dataset according to the class name
dataset = MyDifferDatasetWithEmbedding
# Specify the dataloader according to the class name
dataloader = MyDataLoaderWithEmbedding


# Means and standard deviations for latitudes, longitudes and travel times
# (the value before the comma is for Chengdu, the value after it is for Porto)
[Statistics]
lat_means = 30.651168872309235,41.16060653954797
lng_means = 104.06000501543934,-8.61946359614912
lat_stds = 0.039222931811691585,0.02315827641949562
lng_stds = 0.045337940910596744,0.029208656457667292
labels_means = 1088.0075248390972,691.2889878452086
labels_stds = 1315.707363003298,347.4765869900725
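
For reference, a config in this format can be read with Python's standard configparser; the project's own loader may differ, but the sketch below illustrates how the comma-separated fields hold one value per city (Chengdu first, Porto second):

import configparser

cfg = configparser.ConfigParser()
cfg.read("./experiments/finetuning/64/gru.conf")

train_files = cfg["General"]["train_files"].split(",")                     # one path per city
lat_means = [float(v) for v in cfg["Statistics"]["lat_means"].split(",")]  # [Chengdu, Porto]
inner_k = cfg["Model"].getint("inner_k")
outer_step_size = cfg["Model"].getfloat("outer_step_size")

print(train_files, lat_means, inner_k, outer_step_size)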

Model Training

Here is the command for training the model on both the Chengdu and Porto tasks.

python main.py --config=./experiments/finetuning/64/gru.conf
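
The inner learning rate, inner_k and outer_step_size in the config suggest a first-order (Reptile-style) meta-update across the per-city tasks. The sketch below is a hypothetical illustration of how these hyperparameters typically interact, not the actual MetaTTE training loop; it assumes a Keras-style model and a user-supplied inner_train_step helper:

import copy
import random

def meta_step(model, tasks, inner_train_step, inner_k=10, outer_step_size=0.1):
    # Sample one task (e.g., Chengdu or Porto) and remember the meta-parameters.
    task = random.choice(tasks)
    old_weights = copy.deepcopy(model.get_weights())

    # Adapt to the sampled task with k inner optimization steps.
    for _ in range(inner_k):
        inner_train_step(model, task)   # hypothetical helper: one gradient step on this task

    # Move the meta-parameters a fraction of the way toward the adapted weights.
    new_weights = model.get_weights()
    model.set_weights([old + outer_step_size * (new - old)
                       for old, new in zip(old_weights, new_weights)])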

Model Evaluation

Here is the command for evaluating the model on both the Chengdu and Porto tasks.

python main.py --config=./experiments/finetuning/64/gru.conf
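
Because travel-time labels are z-score normalized with labels_means and labels_stds, predictions should be mapped back to the original units before errors are reported. A minimal sketch, assuming normalized predictions and labels and using the Chengdu statistics from the config above (this is not the repository's own evaluator):

import numpy as np

def denormalized_errors(pred_norm, label_norm, label_mean, label_std):
    pred = pred_norm * label_std + label_mean     # back to original travel-time units
    label = label_norm * label_std + label_mean
    mae = np.mean(np.abs(pred - label))
    rmse = np.sqrt(np.mean((pred - label) ** 2))
    return mae, rmse

mae, rmse = denormalized_errors(np.array([0.0, 0.1]), np.array([0.05, 0.2]),
                                1088.0075248390972, 1315.707363003298)
print(f"MAE: {mae:.1f}, RMSE: {rmse:.1f}")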

Citation

A citation for this work is not yet available.
