
[CVPR 2022] Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes

Dongkwon Jin, Wonhui Park, Seong-Gyun Jeong, Heeyeon Kwon, and Chang-Su Kim

[Overview figure]

Official implementation for "Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes" [paper] [supp] [video].

We construct a new dataset called "SDLane". SDLane is available here. Currently, only the test set is provided due to privacy issues; the full dataset will be released soon.


Related work

We will also present another paper, "Eigencontours: Novel Contour Descriptors Based on Low-Rank Approximation", accepted to CVPR 2022 (oral) [github] [video].

Requirements

  • PyTorch >= 1.6
  • CUDA >= 10.0
  • CuDNN >= 7.6.5
  • python >= 3.6
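
To quickly check whether your environment meets these requirements, you can run a short script like the following (a minimal sketch; the printed values only need to satisfy the versions above):

import torch

# Print the installed PyTorch / CUDA / cuDNN versions for comparison with
# the requirements listed above.
print("PyTorch:", torch.__version__)                 # expect >= 1.6
print("CUDA available:", torch.cuda.is_available())
print("CUDA:", torch.version.cuda)                   # expect >= 10.0
print("cuDNN:", torch.backends.cudnn.version())      # e.g., 7605 for cuDNN 7.6.5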

Installation

  1. Clone the repository. We refer to this directory as ROOT:
$ git clone https://github.com/dongkwonjin/Eigenlanes.git
  2. Download the pre-trained model parameters and preprocessed data into ROOT, and unzip them:
$ cd ROOT
$ unzip pretrained.zip
$ unzip preprocessed.zip
  3. Create a conda environment:
$ conda create -n eigenlanes python=3.7 anaconda
$ conda activate eigenlanes
  4. Install dependencies:
$ conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
$ pip install -r requirements.txt

Directory structure

.                           # ROOT
├── Preprocessing           # directory for data preprocessing
│   ├── culane              # dataset name (culane, tusimple)
│   │   ├── P00             # preprocessing step 1
│   │   │   ├── code
│   │   ├── P01             # preprocessing step 2
│   │   │   ├── code
│   │   └── ...
│   └── ...                 # etc.
├── Modeling                # directory for modeling
│   ├── culane              # dataset name (culane, tusimple)
│   │   ├── code
│   ├── tusimple
│   │   ├── code
│   └── ...                 # etc.
├── pretrained              # pretrained model parameters
│   ├── culane
│   ├── tusimple
│   └── ...                 # etc.
├── preprocessed            # preprocessed data
│   ├── culane              # dataset name (culane, tusimple)
│   │   ├── P03
│   │   │   ├── output
│   │   ├── P04
│   │   │   ├── output
│   │   └── ...
│   └── ...

Evaluation (for CULane)

To test on CULane, you need to install the official CULane evaluation tools. The official metric implementation is available here. Please download the tools into ROOT/Modeling/culane/code/evaluation/culane/. The tools require OpenCV C++; please follow here to install it. Then, compile the evaluation tools (we recommend following the installation guideline):

$ cd ROOT/Modeling/culane/code/evaluation/culane/
$ make
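
For reference, the CULane tools report the F1-measure: predicted and ground-truth lanes are rendered as thick curves, a prediction counts as a true positive when its mask IoU with a ground-truth lane exceeds 0.5, and F1 is computed from the matched counts. A minimal sketch of that final computation (the official C++ tools handle the rendering and matching; the function below is only illustrative):

def f1_measure(num_tp, num_fp, num_fn):
    # F1 from true-positive, false-positive, and false-negative lane counts.
    precision = num_tp / (num_tp + num_fp) if (num_tp + num_fp) > 0 else 0.0
    recall = num_tp / (num_tp + num_fn) if (num_tp + num_fn) > 0 else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)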

Train

  1. Choose the dataset you want to train on (DATASET_NAME).
  2. Pass your dataset path via the --dataset_dir argument.
  3. Edit config.py if you want to control the training process in detail.
$ cd ROOT/Modeling/DATASET_NAME/code/
$ python main.py --run_mode train --pre_dir ROOT/preprocessed/DATASET_NAME/ --dataset_dir /where/is/your/dataset/path/ 

Test

  1. Choose the dataset you want to test on (DATASET_NAME).
  2. Pass your dataset path via the --dataset_dir argument.
  3. To reproduce the results reported in the paper, run:
$ cd ROOT/Modeling/DATASET_NAME/code/
$ python main.py --run_mode test_paper --pre_dir ROOT/preprocessed/DATASET_NAME/ --paper_weight_dir ROOT/pretrained/DATASET_NAME/ --dataset_dir /where/is/your/dataset/path/
  4. To evaluate a model you trained yourself, run:
$ cd ROOT/Modeling/DATASET_NAME/code/
$ python main.py --run_mode test --pre_dir ROOT/preprocessed/DATASET_NAME/ --dataset_dir /where/is/your/dataset/path/

Preprocessing

[Preprocessing example figure]

Data preprocessing is divided into five steps: P00, P01, P02, P03, and P04. Each step is described below, followed by a small sketch of the eigenlane computation in P02 and P03.

  1. In P00, the ground-truth lanes of a dataset are converted to pickle format.
  2. In P01, each lane in the training set is represented by 2D points sampled uniformly in the vertical direction.
  3. In P02, a lane matrix is constructed and SVD is performed. Then, each lane is transformed into its coefficient vector.
  4. In P03, clustering is performed to obtain lane candidates.
  5. In P04, training labels are generated to train the SI module in the proposed SIIC-Net.
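
The sketch below illustrates P02 and P03 under simplifying assumptions (random placeholder data, illustrative shapes and variable names, and k-means as the clustering method); the actual implementation lives in the preprocessing code:

import numpy as np
from sklearn.cluster import KMeans

# Lane matrix: each column is one training lane, flattened into a vector of
# point coordinates sampled uniformly in the vertical direction (placeholder data).
num_points, num_lanes = 50, 1000
lane_matrix = np.random.rand(num_points, num_lanes)

# P02: SVD of the lane matrix; the leading left-singular vectors serve as eigenlanes.
U, S, Vt = np.linalg.svd(lane_matrix, full_matrices=False)
rank = 4                                     # number of eigenlanes (illustrative)
eigenlanes = U[:, :rank]                     # (num_points, rank)

# Each lane is transformed into its coefficient vector in the eigenlane space.
coefficients = eigenlanes.T @ lane_matrix    # (rank, num_lanes)

# P03: clustering in the coefficient space; cluster centers give lane candidates.
kmeans = KMeans(n_clusters=20, n_init=10).fit(coefficients.T)
candidates = eigenlanes @ kmeans.cluster_centers_.T   # back-projected candidates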

To obtain the preprocessed data, run the preprocessing codes in order, or download the preprocessed data instead (see Installation).

$ cd ROOT/Preprocessing/DATASET_NAME/PXX_each_preprocessing_step/code/
$ python main.py --dataset_dir /where/is/your/dataset/path/

Reference

@InProceedings{Jin2022eigenlanes,
    title={Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes},
    author={Jin, Dongkwon and Park, Wonhui and Jeong, Seong-Gyun and Kwon, Heeyeon and Kim, Chang-Su},
    booktitle={CVPR},
    year={2022}
}