PyTorch implementation of the CVPR 2020 paper “VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation”

Overview

VectorNet Re-implementation

This is an unofficial PyTorch implementation of the CVPR 2020 paper "VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation". (It was also written as part of a test for the Summer Camp 2020 organized by IIIS, Tsinghua University.)

  1. Environment

    Python 3.7, PyTorch 1.1.0, torchvision 0.3.0, CUDA 9.0

  2. File overview

    ----- VectorNet

    +--- ArgoverseDataset.py dataset loading, preprocessing, and conversion to tensors

    +--- subgraph_net.py classes implementing the polyline subgraph

    +--- gnn.py GCN with an attention mechanism; the graph is fully connected, so dgl is not used

    +--- vectornet.py model combining the subgraph and the GNN, plus the loss computation

    +--- train.py training entry point; saves checkpoints

    +--- test.py testing entry point; also implements the evaluation function and saves inference results

    +--- Visualization.ipynb visualizes the vectorized HD map

  3. Setup

    • Install the argoverse-api and, following its instructions, place the HD map data in the expected location
    • Download the forecasting dataset and set cfg['data_locate'] in train.py and test.py to the path where it is extracted
  4. Code walkthrough

    • ArgoverseDataset.py

      Defines the class ArgoverseForecastDataset(torch.utils.data.Dataset)

      • def __init__(self, cfg) initializes the class; the main steps are

        self.axis_range = self.get_map_range(self.am)  # used to normalize coordinates
        self.city_halluc_bbox_table, self.city_halluc_tableidx_to_laneid_map = self.am.build_hallucinated_lane_bbox_index()
        self.vector_map, self.extra_map = self.generate_vector_map()

        These call the argoverse API to read the HD map data; the key function is generate_vector_map.

      • def generate_vector_map(self) reads the HD map and converts it into vectors

        The argoverse API get_lane_segment_polygon(key, city_name) is used to obtain the sampled points of the road boundaries, which are then concatenated into vectors in the way specified by the paper. The API returns a polygon, while we only need the two boundaries, so some extra processing is applied.

        The relevant semantic labels are also retrieved and returned in extra_map, to be packed into the vectors later.
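
        A minimal, hedged sketch of how a lane polygon might be turned into paper-style vectors. It assumes the argoverse ArgoverseMap API; the vector layout (start point, end point, polyline id) is a simplified placeholder, and the real code also appends the semantic labels from extra_map.

        import numpy as np
        from argoverse.map_representation.map_api import ArgoverseMap

        def lane_to_vectors(am: ArgoverseMap, lane_id: int, city_name: str, polyline_id: int):
            """Turn one lane segment's boundary points into vectors [x_s, y_s, x_e, y_e, polyline_id]."""
            # get_lane_segment_polygon returns the closed lane polygon (N x 3); z is dropped here
            polygon = am.get_lane_segment_polygon(lane_id, city_name)[:, :2]
            # the polygon traces one boundary forward and the other back,
            # so split it in half to recover the two boundaries
            half = len(polygon) // 2
            boundaries = [polygon[:half], polygon[half:]]

            vectors = []
            for boundary in boundaries:
                # consecutive sampled points (p_i, p_{i+1}) form one vector
                for start, end in zip(boundary[:-1], boundary[1:]):
                    vectors.append(np.concatenate([start, end, [polyline_id]]))
            return np.stack(vectors)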

      • def __getitem__(self, index) is the per-sample data access function; it reads the trajectory data, applies a series of coordinate preprocessing steps, and finally converts everything to tensors

        The trajectory is also obtained through the argoverse API. Preprocessing consists of 3 steps (a minimal sketch follows the list):

        (1) Translate the coordinates so that last_observe is moved to the center

        (2) Rotate with a homogeneous-coordinate rotation matrix, the rotation angle being obtained from a vector inner product

        (3) Normalize the coordinates into a fixed range with a linear transform; the position of last_observe is taken as the center of the data distribution, i.e. $$ x = \frac{x}{max - min} $$
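
        A minimal sketch of the three preprocessing steps above; the variable names, the sign conventions, and the exact normalization range are illustrative rather than the repository's.

        import numpy as np

        def preprocess(points: np.ndarray, last_observe: np.ndarray, heading: np.ndarray, axis_range: float):
            """points: (N, 2) coordinates; heading: direction vector at last_observe."""
            # (1) translate so that last_observe sits at the center
            shifted = points - last_observe

            # (2) rotate so that the heading aligns with the x-axis; the repository derives
            #     the angle from a vector inner product, arctan2 is used here for brevity,
            #     and the rotation is applied through a homogeneous matrix
            theta = -np.arctan2(heading[1], heading[0])
            rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                            [np.sin(theta),  np.cos(theta), 0.0],
                            [0.0,            0.0,           1.0]])
            homo = np.hstack([shifted, np.ones((len(shifted), 1))])   # homogeneous coordinates
            rotated = (rot @ homo.T).T[:, :2]

            # (3) normalize linearly into a fixed range, x = x / (max - min)
            return rotated / axis_range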

      • __getitem__ returns

             self.traj_feature, self.map_feature

        self.traj_feature is an $N\times feature$ tensor holding the vector set of the trajectory polyline. self.map_feature is a dict with three keys: map_feature['PIT'] and map_feature['MIA'] are lists of the road polylines of the two cities, i.e. each element of a list is an $N\times feature$ tensor describing one road polyline, and map_feature['city_name'] stores the city this trajectory belongs to.

      • def get_trajectory(self, index) is similar to generate_vector_map; the difference is that the trajectory vectors are concatenated along timestamps, and the timestamp is packed into the vector as semantic-label information.
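
        A small hedged example of iterating the dataset and inspecting the structures returned by __getitem__ as described above; the cfg contents beyond data_locate are hypothetical and the real __init__ may expect more keys.

        from ArgoverseDataset import ArgoverseForecastDataset

        cfg = {'data_locate': './data/forecasting/train/data'}   # hypothetical config values
        dataset = ArgoverseForecastDataset(cfg)
        traj_feature, map_feature = dataset[0]

        print(traj_feature.shape)                  # (N, feature): vector set of the trajectory polyline
        print(map_feature['city_name'])            # 'PIT' or 'MIA'
        for polyline in map_feature['PIT'][:3]:    # each element is one road polyline, an (N_i, feature) tensor
            print(polyline.shape)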

    • subgraph_net.py

      Defines the classes SubgraphNet(nn.Module) and SubgraphNet_Layer(nn.Module)

      • class SubgraphNet_Layer

        Input: an $N\times feature$ tensor for a single polyline

        Output: an $N\times (feature+global\ feature)$ tensor for the same polyline

        Implements a single SubgraphNet layer. Following the paper, the encoder is an MLP made of a fully connected layer, a layer_norm, and a ReLU activation, followed by max_pool to extract the global feature and a final concatenation that merges the two, similar to Point R-CNN. (A minimal sketch of both classes follows the SubgraphNet description below.)

      • class SubgraphNet

        Input: an $N\times feature$ tensor for a single polyline

        Output: a $1\times (feature+global\ feature)$ tensor for the polyline

        Stacks 3 SubgraphNet_Layer modules and applies a final max_pool to extract the representative polyline feature
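
        A minimal sketch of both classes as described above; the hidden widths, class names, and constructor arguments are illustrative, not the repository's exact ones.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SubgraphNetLayer(nn.Module):
            """One subgraph layer: node-wise MLP encoder -> max-pool -> concatenate (N x f -> N x 2*hidden)."""
            def __init__(self, in_dim, hidden_dim=64):
                super().__init__()
                self.fc = nn.Linear(in_dim, hidden_dim)
                self.norm = nn.LayerNorm(hidden_dim)

            def forward(self, x):                            # x: (N, in_dim), one polyline
                enc = F.relu(self.norm(self.fc(x)))          # fully connected + layer_norm + ReLU
                pooled = enc.max(dim=0, keepdim=True)[0]     # (1, hidden): polyline-level feature
                global_feat = pooled.expand_as(enc)          # broadcast back to every node
                return torch.cat([enc, global_feat], dim=1)  # (N, 2 * hidden)

        class SubgraphNet(nn.Module):
            """Three stacked layers followed by a final max-pool over the nodes."""
            def __init__(self, in_dim, hidden_dim=64):
                super().__init__()
                self.layer1 = SubgraphNetLayer(in_dim, hidden_dim)
                self.layer2 = SubgraphNetLayer(2 * hidden_dim, hidden_dim)
                self.layer3 = SubgraphNetLayer(2 * hidden_dim, hidden_dim)

            def forward(self, polyline):                     # (N, in_dim) -> (1, 2 * hidden)
                x = self.layer3(self.layer2(self.layer1(polyline)))
                return x.max(dim=0, keepdim=True)[0]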

    • gnn.py

      Defines the class GraphAttentionNet(nn.Module)

      • class GraphAttentionNet

        Input: the $K\times (feature+global\ feature)$ feature matrix of the whole graph

        Output: the $K\times value\ dims$ feature matrix of the whole graph after propagation

        Because this paper defines the adjacency matrix as fully connected, there is no need to build an explicit graph for message passing. The attention mechanism is implemented in this class with the formula $$ GNN(P)=softmax(P_QP_K^T)P_V $$ Note that these are all matrix operations: $P_Q$ is the query, $P_K$ is the key, $P_V$ is the value, and the softmax step yields the weights over the values.

        The implementation follows the paper Attention Is All You Need; a minimal sketch is given below.
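
        A minimal sketch of the self-attention propagation over the fully connected polyline graph, following the formula above; the projection dimensions are placeholders.

        import torch.nn as nn
        import torch.nn.functional as F

        class GraphAttentionNet(nn.Module):
            """GNN(P) = softmax(P_Q @ P_K^T) @ P_V over a fully connected graph of K polylines."""
            def __init__(self, in_dim, key_dim=64, value_dim=64):
                super().__init__()
                self.q_proj = nn.Linear(in_dim, key_dim)
                self.k_proj = nn.Linear(in_dim, key_dim)
                self.v_proj = nn.Linear(in_dim, value_dim)

            def forward(self, P):                      # P: (K, in_dim) polyline features
                Q, K, V = self.q_proj(P), self.k_proj(P), self.v_proj(P)
                attn = F.softmax(Q @ K.t(), dim=-1)    # (K, K) attention weights over all polylines
                return attn @ V                        # (K, value_dim) propagated features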

    • vectornet.py

      Defines the class VectorNet(nn.Module)

      • class VectorNet The forward of this class covers two cases: train and evaluate

        Input: trajectory_batch, mapfeature_batch

        Output: the loss in train mode; the predictions and the ground-truth labels in evaluate mode

        • Since different roads have different numbers of polyline sample points, the dataset stores them in a list, so this class first unpacks that data
        • It then builds two SubgraphNet instances, traj_subgraphnet and map_subgraphnet, which encode every polyline into a $1\times (feature+global\ feature)$ feature, and concatenates the results
        • The features are L2-normalized so that the following GNN trains effectively; after normalization they are fed directly into the GNN, which returns the propagated $1\times value\ dims$ feature. The decoder is an MLP with parameters similar to subgraph_net, plus one extra fully connected layer to regress the coordinates
        • In train mode the loss is computed with torch.nn.MSELoss; it can be shown that when the error follows a standard Gaussian distribution, the Gaussian negative log-likelihood loss reduces to the MSE loss, so the two are essentially equivalent. In evaluate mode the prediction and the label are returned together, and the Average Displacement Error is computed in test.py. (A minimal sketch of the forward pass follows this list.)
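
        A minimal sketch of the forward flow in this list, reusing the repository's SubgraphNet and GraphAttentionNet; the constructor arguments, feature sizes, and batch handling are assumptions made only for illustration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F
        from subgraph_net import SubgraphNet            # repo modules; constructor signatures assumed
        from gnn import GraphAttentionNet

        class VectorNetSketch(nn.Module):
            def __init__(self, in_dim=8, poly_dim=128, value_dim=64, out_dim=2 * 30):
                super().__init__()
                self.traj_subgraphnet = SubgraphNet(in_dim)        # poly_dim must match its output size
                self.map_subgraphnet = SubgraphNet(in_dim)
                self.gnn = GraphAttentionNet(poly_dim, value_dim, value_dim)
                self.decoder = nn.Sequential(                      # MLP decoder with one extra FC layer
                    nn.Linear(value_dim, value_dim), nn.LayerNorm(value_dim), nn.ReLU(),
                    nn.Linear(value_dim, out_dim))
                self.loss_fn = nn.MSELoss()

            def forward(self, traj_vectors, map_polylines, label=None):
                # encode every polyline into a 1 x poly_dim feature, then stack them
                feats = [self.traj_subgraphnet(traj_vectors)]
                feats += [self.map_subgraphnet(p) for p in map_polylines]   # unpacked list of road polylines
                P = torch.cat(feats, dim=0)
                P = F.normalize(P, p=2, dim=1)           # L2 normalize before the GNN
                out = self.gnn(P)[0:1]                   # propagated feature of the trajectory polyline
                pred = self.decoder(out)
                if self.training:
                    return self.loss_fn(pred, label)     # train: MSE loss against the future coordinates
                return pred, label                       # evaluate: prediction and ground truth
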
    • train.py

      Training entry point

      • def main()

        First initializes the configuration; for simplicity the configuration (cfg) is hard-coded in the script, although a cleaner approach would be to pass it in via argparse on the command line. It then instantiates the dataset, wraps it into minibatches with a dataloader, builds the model, and sets up the optimizer and the learning-rate scheduler.

        The loss is visualized with tensorboard; the event files are saved under ./run/, so a SummaryWriter is initialized as well.
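
        A minimal sketch of the setup in main(); the cfg keys other than data_locate, the hyperparameters, and the VectorNet constructor arguments are placeholders.

        import torch
        from torch.utils.data import DataLoader
        from torch.utils.tensorboard import SummaryWriter
        from ArgoverseDataset import ArgoverseForecastDataset
        from vectornet import VectorNet

        def main():
            cfg = {'data_locate': './data/forecasting/train/data',   # config hard-coded as in the repo
                   'batch_size': 2, 'epochs': 25}
            dataset = ArgoverseForecastDataset(cfg)
            train_loader = DataLoader(dataset, batch_size=cfg['batch_size'], shuffle=True)
            model = VectorNet(cfg)                                    # constructor arguments assumed
            optimizer = torch.optim.Adadelta(model.parameters())
            scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.3)
            writer = SummaryWriter('./run')                           # tensorboard events under ./run/
            do_train(model, cfg, train_loader, optimizer, scheduler, writer)   # sketched below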

      • def do_train(model, cfg, train_loader, optimizer, scheduler, writer)

        A fairly standard main training loop: the learning rate is adjusted every 5 epochs, the model parameters are saved every 10 epochs and once more when training finishes, progress is printed every 2 iterations (minibatches), and a logger writes the log file.
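
        A minimal sketch of the loop cadence described above; checkpoint paths and the batch unpacking are placeholders, and the 5-epoch learning-rate adjustment is delegated to the StepLR scheduler from the previous sketch.

        import logging
        import torch

        logger = logging.getLogger('train')

        def do_train(model, cfg, train_loader, optimizer, scheduler, writer):
            model.train()
            step = 0
            for epoch in range(cfg['epochs']):
                for it, (traj_batch, map_batch) in enumerate(train_loader):
                    optimizer.zero_grad()
                    loss = model(traj_batch, map_batch)        # train mode returns the loss
                    loss.backward()
                    optimizer.step()
                    writer.add_scalar('loss', loss.item(), step)
                    if it % 2 == 0:                            # report every 2 iterations (minibatches)
                        logger.info(f'epoch {epoch} iter {it} loss {loss.item():.4f}')
                    step += 1
                scheduler.step()                               # StepLR adjusts the learning rate every 5 epochs
                if (epoch + 1) % 10 == 0:                      # checkpoint every 10 epochs
                    torch.save(model.state_dict(), f'./checkpoint_epoch{epoch + 1}.pth')
            torch.save(model.state_dict(), './checkpoint_final.pth')   # save once more at the end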

    • test.py

      Inference entry point

      • def main()

        Almost identical to train.py; note the two parameters cfg['model_path'] (path of the model checkpoint) and cfg['save_path'] (where the inference results are stored).

      • def inference(model, cfg, val_loader)

        Simplified compared with do_train, since the vector_map data no longer needs separate processing; it has already been encoded into the network (only a single GNN layer is used). The output results and labels are collected into lists, and evaluate() is called to compute the ADE metric.

      • def evaluate(dataset, predictions, labels)

        The dataset is passed in because the preprocessed data has to be transformed back to the original coordinates: first de-normalize, then rotate back, and finally translate. The ADE loss is the mean Euclidean distance between the predicted points and the ground-truth points. The inference results are saved under cfg['save_path'].
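
        A minimal sketch of the ADE computation after mapping both predictions and labels back to the original frame; inverse_transform is a hypothetical stand-in for the dataset methods that de-normalize, rotate back, and translate.

        import numpy as np

        def average_displacement_error(predictions, labels, inverse_transform):
            """ADE: mean Euclidean distance between predicted and ground-truth points."""
            errors = []
            for pred, label in zip(predictions, labels):
                pred = inverse_transform(np.asarray(pred).reshape(-1, 2))     # back to map coordinates
                label = inverse_transform(np.asarray(label).reshape(-1, 2))
                errors.append(np.linalg.norm(pred - label, axis=1).mean())    # per-trajectory mean distance
            return float(np.mean(errors))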

  5. Some visualization results (see visualization.ipynb for details)

    • Loss convergence (150 samples, trained for 25 epochs with the Adadelta optimizer; slightly overfitted) img1
      img2
    • Baseline results (150 samples, trained for 10 epochs, 9-step prediction) img3
    • Map vectorization
      img1
      img4
    • Trajectory prediction (blue is the label, red is the prediction; regression toward the mean appears in the intersection scene)
      img2