MetaTTE: a Meta-Learning Based Travel Time Estimation Model for Multi-city Scenarios

Overview


This is the official TensorFlow implementation of MetaTTE, as described in the manuscript.

Core Requirements

  • tensorflow~=2.3.0
  • numpy~=1.18.4
  • spektral~=0.6.1
  • pandas~=1.0.3
  • tqdm~=4.46.0
  • opencv-python~=4.3.0.36
  • matplotlib~=3.2.1
  • Pillow~=7.1.2
  • scipy~=1.4.1

All dependencies can be installed with the following command:

pip install -r requirements.txt

Data Preparation

The datasets adopted in this paper are provided via Google Drive. After downloading the zip file, please extract all the files in its data directory into the data folder of this project.

Download Link: Download
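
As a quick sanity check after extraction, the .npy files can be loaded with NumPy. The paths below follow the sample config in the next section; the exact array layout is defined by the dataset classes in this repository, so treat this as an illustrative inspection snippet only.

import numpy as np

# Paths match the sample config below; adjust them if your layout differs.
train_files = ["./data/chengdu/train.npy", "./data/porto/train.npy"]

for path in train_files:
    # allow_pickle=True is an assumption, needed only if the arrays store
    # Python objects such as variable-length trajectories.
    data = np.load(path, allow_pickle=True)
    print(path, type(data), getattr(data, "shape", None))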

Configuration

We list a sample of our config file below, with comments added for explanation. (Please do NOT include the comments in your own config files.)

[General]
mode = train
# Specify the absolute paths of the training, validation and testing files
train_files = ./data/chengdu/train.npy,./data/porto/train.npy
val_files = ./data/chengdu/val.npy,./data/porto/val.npy
test_files = ./data/chengdu/test.npy,./data/porto/test.npy
# Specify the batch size
batch_size = 32
# Specify the GPU device number to use
gpu = 7
# Specify the unique label for each experiment
prefix = tte_exp_64_gru

[Model]
# Specify the inner learning rate
learning_rate = 1e-2
# Specify the reduce (decay) factor applied to the inner learning rate
lr_reduce = 0.5
# Specify the maximum number of training iterations
epoch = 500000
# Specify k for the inner-loop k-shot updates
inner_k = 10
inner_k = 10
# Specify the outer step size
outer_step_size = 0.1
# Specify the model according to the class name
model = MSMTTEGRUAttModel
# Specify the dataset according to the class name
dataset = MyDifferDatasetWithEmbedding
# Specify the dataloader according to the class name
dataloader = MyDataLoaderWithEmbedding


# Means and standard deviations of latitude, longitude and travel time
# (the value before each comma is for Chengdu, the value after is for Porto)
[Statistics]
lat_means = 30.651168872309235,41.16060653954797
lng_means = 104.06000501543934,-8.61946359614912
lat_stds = 0.039222931811691585,0.02315827641949562
lng_stds = 0.045337940910596744,0.029208656457667292
labels_means = 1088.0075248390972,691.2889878452086
labels_stds = 1315.707363003298,347.4765869900725
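
The per-city values in [Statistics] are plain comma-separated floats, so they can be read with Python's configparser. The sketch below shows one way to apply them as z-score normalization; the helper name normalize_point is illustrative and not part of this repository's code.

import configparser

config = configparser.ConfigParser()
config.read("./experiments/finetuning/64/gru.conf")

stats = config["Statistics"]
# Index 0 is Chengdu and index 1 is Porto, following the sample config above.
lat_means = [float(v) for v in stats["lat_means"].split(",")]
lat_stds = [float(v) for v in stats["lat_stds"].split(",")]
lng_means = [float(v) for v in stats["lng_means"].split(",")]
lng_stds = [float(v) for v in stats["lng_stds"].split(",")]

def normalize_point(lat, lng, city_idx):
    # Z-score normalize one coordinate pair for the given city (illustrative).
    return ((lat - lat_means[city_idx]) / lat_stds[city_idx],
            (lng - lng_means[city_idx]) / lng_stds[city_idx])

print(normalize_point(30.66, 104.07, city_idx=0))  # a point near central Chengdu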

Model Training

Here are the commands for training the model on both the Chengdu and Porto tasks.

python main.py --config=./experiments/finetuning/64/gru.conf
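
The [Model] options above follow a common first-order meta-learning pattern: an inner loop of inner_k gradient steps per task at learning_rate, followed by an outer update scaled by outer_step_size. The TensorFlow sketch below illustrates that pattern on a toy model; it is a conceptual outline under these assumptions, not the training loop implemented in main.py.

import tensorflow as tf

inner_lr, outer_step_size, inner_k = 1e-2, 0.1, 10  # values from the sample config

# Toy stand-in model; MetaTTE itself uses the model class named in the config.
model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                             tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 4))
loss_fn = tf.keras.losses.MeanAbsoluteError()

def meta_step(task_batches):
    # One Reptile-style meta step over per-task (x, y) batches (illustrative).
    initial_weights = [w.numpy() for w in model.weights]
    adapted = []
    for x, y in task_batches:
        # Inner loop: inner_k gradient steps on one task from the shared weights.
        model.set_weights(initial_weights)
        opt = tf.keras.optimizers.SGD(inner_lr)
        for _ in range(inner_k):
            with tf.GradientTape() as tape:
                loss = loss_fn(y, model(x, training=True))
            opt.apply_gradients(zip(tape.gradient(loss, model.trainable_weights),
                                    model.trainable_weights))
        adapted.append([w.numpy() for w in model.weights])
    # Outer update: move the shared weights toward the mean adapted weights.
    new_weights = [w0 + outer_step_size * (sum(ws[i] for ws in adapted) / len(adapted) - w0)
                   for i, w0 in enumerate(initial_weights)]
    model.set_weights(new_weights)

# Example usage with two synthetic tasks (one per city):
tasks = [(tf.random.normal((32, 4)), tf.random.normal((32, 1))) for _ in range(2)]
meta_step(tasks)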

Model Evaluation

Here are the commands for testing the trained model on both the Chengdu and Porto tasks. The same config file is used; presumably the mode option in the [General] section is set to test for evaluation.

python main.py --config=./experiments/finetuning/64/gru.conf

Citation

A citation for this work is not yet available.
