Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth [Paper]


Downloads

  • [Downloads] Trained ckpt files for NYU Depth V2 and KITTI
  • [Downloads] Predicted depth map png files for the NYU Depth V2 and KITTI (Eigen split) test sets

Requirements

Tested on

python==3.7.7
torch==1.6.0
h5py==3.6.0
scipy==1.7.3
opencv-python==4.5.5
mmcv==1.4.3
timm==0.5.4
albumentations==1.1.0
tensorboardX==2.4.1

You can install the above packages with

$ pip install -r requirements.txt
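
If you want a quick sanity check that the pinned packages are importable, a minimal sketch (the versions in the comments are the tested ones listed above):

import torch, cv2, h5py, scipy, timm, albumentations, mmcv

# Print a few versions to compare against the tested configuration above.
print("torch", torch.__version__)            # expected 1.6.0
print("opencv-python", cv2.__version__)      # expected 4.5.5
print("timm", timm.__version__)              # expected 0.5.4
print("mmcv", mmcv.__version__)              # expected 1.4.3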

Inference and Evaluation

Dataset

NYU Depth V2
$ cd ./datasets
$ wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
$ python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/
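
After extraction, official_splits follows the BTS-style layout: each scene folder contains paired RGB jpgs and 16-bit depth pngs stored in millimeters (an assumption to verify against the repo's dataloader). A minimal loading sketch with illustrative file names:

import cv2
import numpy as np

# Illustrative paths; actual file names depend on the extracted split.
rgb_path   = "./datasets/nyu_depth_v2/official_splits/test/bathroom/rgb_00045.jpg"
depth_path = "./datasets/nyu_depth_v2/official_splits/test/bathroom/sync_depth_00045.png"

rgb = cv2.cvtColor(cv2.imread(rgb_path), cv2.COLOR_BGR2RGB)
# 16-bit depth in millimeters -> meters (assumed scale; check the repo's dataloader).
depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
print(rgb.shape, depth.min(), depth.max())   # NYU depth should lie roughly in [0, 10] m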
KITTI

Download the annotated depth maps dataset (14GB) from [link] into ./datasets/kitti/data_depth_annotated

$ cd ./datasets/kitti/data_depth_annotated/
$ unzip data_depth_annotated.zip
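
The annotated depth maps are 16-bit pngs; by the KITTI depth convention a pixel value of 0 marks missing ground truth and valid pixels encode depth × 256. A minimal decoding sketch with an illustrative path:

import cv2
import numpy as np

# Illustrative file inside data_depth_annotated; adjust to an existing drive/frame.
path = ("./datasets/kitti/data_depth_annotated/val/2011_09_26_drive_0002_sync/"
        "proj_depth/groundtruth/image_02/0000000005.png")

raw = cv2.imread(path, cv2.IMREAD_UNCHANGED)      # uint16
valid = raw > 0                                   # 0 = no ground truth
depth = raw.astype(np.float32) / 256.0            # meters, per the KITTI depth convention
print(depth[valid].min(), depth[valid].max())     # roughly within (0, 80] m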

With these two datasets prepared, you can run eval_with_pngs.py/test.py for NYU Depth V2 and eval_with_pngs.py for KITTI.

To run the full set of experiments, please follow the [BTS] repository to obtain the complete NYU Depth V2 and KITTI datasets.

Your dataset directory should be

root
- nyu_depth_v2
  - bathroom_0001
  - bathroom_0002
  - ...
  - official_splits
- kitti
  - data_depth_annotated
  - raw_data
  - val_selection_cropped
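
A small sketch to confirm the layout before running the scripts (paths mirror the tree above):

import os

root = "./datasets"
expected = [
    "nyu_depth_v2/official_splits",
    "kitti/data_depth_annotated",
    "kitti/raw_data",
    "kitti/val_selection_cropped",
]
for rel in expected:
    path = os.path.join(root, rel)
    print(("OK      " if os.path.isdir(path) else "MISSING ") + path)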

Evaluation

  • Evaluate with png images

    for NYU Depth V2

    $ python ./code/eval_with_pngs.py --dataset nyudepthv2 --pred_path ./best_nyu_preds/ --gt_path ./datasets/nyu_depth_v2/ --max_depth_eval 10.0 
    

    for KITTI

    $ python ./code/eval_with_pngs.py --dataset kitti --split eigen_benchmark --pred_path ./best_kitti_preds/ --gt_path ./datasets/kitti/ --max_depth_eval 80.0 --garg_crop
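
    Both commands report the standard monocular depth metrics (δ < 1.25, δ < 1.25², δ < 1.25³, Abs Rel, Sq Rel, RMSE, RMSE log, log10, SILog). A generic sketch of how these metrics are usually computed, not the repository's exact evaluation code:

    import numpy as np

    def compute_depth_metrics(gt, pred):
        # Generic re-implementation of the standard metrics; the repo's
        # eval_with_pngs.py may differ in masking/cropping details.
        valid = gt > 0
        gt, pred = gt[valid], pred[valid]

        thresh = np.maximum(gt / pred, pred / gt)
        d1 = (thresh < 1.25).mean()
        d2 = (thresh < 1.25 ** 2).mean()
        d3 = (thresh < 1.25 ** 3).mean()

        abs_rel = np.mean(np.abs(gt - pred) / gt)
        sq_rel = np.mean((gt - pred) ** 2 / gt)
        rmse = np.sqrt(np.mean((gt - pred) ** 2))
        rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
        log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))

        err = np.log(pred) - np.log(gt)
        silog = np.sqrt(np.mean(err ** 2) - np.mean(err) ** 2) * 100

        return dict(d1=d1, d2=d2, d3=d3, abs_rel=abs_rel, sq_rel=sq_rel,
                    rmse=rmse, rmse_log=rmse_log, log10=log10, silog=silog)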
    
  • Evaluate with model (NYU Depth V2)

    Result images will be saved in ./args.result_dir/args.exp_name (default: ./results/test)

    • To evaluate only

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <dir_to_ckpt> --do_evaluate --max_depth 10.0 --max_depth_eval 10.0

    • To save pngs for eval_with_pngs

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <dir_to_ckpt> --save_eval_pngs --max_depth 10.0 --max_depth_eval 10.0

    • To save visualized depth maps

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <dir_to_ckpt> --save_visualize --max_depth 10.0 --max_depth_eval 10.0

    For KITTI, change the arguments to --dataset kitti --max_depth 80.0 --max_depth_eval 80.0 and add --kitti_crop [garg_crop or eigen_crop]; see the example below.
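
    For example, a full KITTI evaluation command would look like (the checkpoint placeholder is illustrative):

    $ python ./code/test.py --dataset kitti --data_path ./datasets/ --ckpt_dir <dir_to_ckpt> --do_evaluate --kitti_crop garg_crop --max_depth 80.0 --max_depth_eval 80.0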

Inference

  • Inference with image directory
    $ python ./code/test.py --dataset imagepath --data_path <dir_to_imgs> --save_visualize
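
    The --save_visualize flag writes colorized maps directly; if you instead have raw 16-bit depth pngs (for example those saved with --save_eval_pngs during evaluation), a minimal OpenCV sketch to colorize them (the file name is illustrative; results land under ./results/<exp_name>/ by default):

    import cv2
    import numpy as np

    # Illustrative result file; see the default result directory noted above.
    depth = cv2.imread("./results/test/example_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

    # Normalize to 8-bit and apply a colormap for easier inspection.
    norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    color = cv2.applyColorMap(norm, cv2.COLORMAP_INFERNO)
    cv2.imwrite("example_depth_color.png", color)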
    
         

To-Do

  • Add inference
  • Add training codes
  • Add dockerHub link
  • Add colab

References

[1] From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation. [code]

[2] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. [code]
