Rendering Point Clouds with Compute Shaders

Overview

Compute Shader Based Point Cloud Rendering

This repository contains the source code for our tech report:
Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
Markus Schütz, Bernhard Kerbl, Michael Wimmer. (not peer-reviewed, currently in submission)

  • Compute shaders can render point clouds up to an order of magnitude faster than GL_POINTS.
  • With a combination of warp-wide deduplication and early-z, compute shaders are able to render 796 million points (12.7GB) at a stable 62 to 64 frames per second from various viewpoints on an RTX 3090. This corresponds to a memory bandwidth utilization of about 802GB/s, or a throughput of about 50 billion points per second.
  • The vertex order also strongly affects performance. Some locality among points that are consecutive in memory is beneficial, but excessive locality can result in drastic slowdowns if it leads to thousands of GPU threads attempting to update a single pixel. As such, neither Morton-ordered nor shuffled buffers are optimal. However, combining both, by first sorting by Morton code and then shuffling batches of 128 points while leaving the points within each batch in order, results in an improved ordering that ensures high performance with our compute approaches, and it also increases the performance of GL_POINTS by up to 5 times (see the sketch below).
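
For illustration, here is a minimal JavaScript sketch of this shuffled Morton ordering; computeMortonCode is a hypothetical helper standing in for any 3D Morton encoding, and this is not the framework's actual preprocessing code:

    // Sort points by Morton code, then shuffle the order of 128-point batches
    // while keeping the points inside each batch in their Morton order.
    function shuffledMortonOrder(points, batchSize = 128) {
        points.sort((a, b) => {
            const ka = computeMortonCode(a); // hypothetical 3D Morton encoding
            const kb = computeMortonCode(b);
            return ka < kb ? -1 : (ka > kb ? 1 : 0);
        });

        const batches = [];
        for (let i = 0; i < points.length; i += batchSize) {
            batches.push(points.slice(i, i + batchSize));
        }

        // Fisher-Yates shuffle of the batch order
        for (let i = batches.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [batches[i], batches[j]] = [batches[j], batches[i]];
        }

        return batches.flat();
    }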

About the Framework

This framework is written in C++ and JavaScript (using V8). Most of the rendering is done in JavaScript through bindings to OpenGL 4.5 functions. It is written with live-coding in mind, so many JavaScript files are immediately re-executed at runtime as soon as they are saved in any text editor. As such, code has to be written with repeated execution in mind (see the sketch below).
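
As an illustration of what "repeated execution in mind" means in practice, here is a hypothetical live-coded snippet; initPointBuffer and view are placeholder names, not the framework's actual API:

    // Hypothetical live-coding-friendly pattern: this file may be re-executed
    // every time it is saved, so expensive resources are created only once.
    if (typeof window.pointBuffer === "undefined") {
        window.pointBuffer = initPointBuffer(); // placeholder, not the real API
    }

    // cheap, idempotent settings can safely be re-applied on every execution
    view.position = [10, 10, 10]; // placeholder names
    view.lookAt = [0, 0, 0];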

Getting Started

  • Compile the Skye.sln project with Visual Studio.
  • Open the workspace in VS Code.
  • Open "load_pointcloud.js" (quick-search files via Ctrl + E).
    • Adapt the path to the correct location of the LAS file.
    • Adapt position and lookAt to a viewpoint that fits your point cloud.
    • Change window.x to something that fits your monitor setup, e.g., 0 if you've got a single monitor, or 2540 if you've got two monitors and your first one has a width of 2540 pixels. (A hypothetical sketch of these settings is shown after this list.)
  • Press "Ctrl + Shift + B" to start the app. You should be seing an empty green window. (window.x is not yet applied)
  • Once you save "load_pointcloud.js" via ctrl+s, it will be executed, the window will be repositioned, and the point cloud will be loaded.
  • You can change position and lookAt at runtime and apply them by simply saving load_pointcloud.js again. The pointcloud will not be loaded again - to do so, you'll need to restart first.
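
A hypothetical sketch of the settings involved, just to illustrate the workflow; the actual variable names and structure of load_pointcloud.js may differ:

    // hypothetical load_pointcloud.js settings (illustrative names, not the real file contents)
    let path = "D:/data/pointclouds/my_scan.las";   // location of your LAS file

    let position = [650.0, 120.0, 380.0];           // camera position that fits your point cloud
    let lookAt   = [0.0, 0.0, 0.0];                 // point the camera looks at

    // 0 for a single monitor, or e.g. 2540 to move the window onto the second
    // monitor when the first monitor is 2540 pixels wide
    window.x = 2540;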

After loading the point cloud, you should see something like the screenshot below. The framework includes an ImGui window with frame times, and another window that lets you switch between the various rendering methods. It is best tried with data sets of tens of millions or hundreds of millions of points!

(Screenshot: the framework after loading a point cloud, showing the frame-time window and the rendering-method selection window.)

Code Sections

Code for the individual rendering methods is primarily found in the modules/compute_<methods> folders.

Method              Location
atomicMin           ./modules/compute
reduce              ./modules/compute_ballot
early-z             ./modules/compute_earlyDepth
reduce & early-z    ./modules/compute_ballot_earlyDepth
dedup               ./modules/compute_ballot_earlyDepth_dedup
HQS                 ./modules/compute_hqs
HQS1R               ./modules/compute_hqs_1x64bit_fast
busy-loop           ./modules/compute_guenther
just-set            ./modules/compute_just_set

Results

Frame times when rendering 796 million points on an RTX 3090 in a close-up viewpoint. Times are in milliseconds; lower is better. The compute methods reduce (with early-z) and dedup (with early-z) yield the best results with Morton order (<16.6ms, >60fps). The shuffled Morton order greatly improves the performance of GL_POINTS and of some compute methods, and it is usually either the fastest ordering or within a close margin of the fastest combination of rendering method and ordering.

Not depicted is that the dedup method is the most stable approach, continuously maintaining >60fps in all viewpoints, while the performance of the reduce method varies and may drop to 50fps in some viewpoints. As such, we would recommend using dedup in conjunction with Morton order if the necessary compute features are available, and reduce (with early-z) for wider support.

Comments
  • Can you provide some dataset to test the demo (like the default Candi Banyunibo data set)?

    Hi, just found this paper & project via graphics weekly news. I just compiled the demo, and it seems to use that dataset by default (banyunibo_inside_morton). Is there any way to obtain that dataset? If not, can you provide a download link to some huge, equivalent data set used by the demo, like retz, eclepens, etc.? Are these datasets under non-open licenses? I just wanted to test performance on my Titan V compared to a 3090. Thanks.

    opened by oscarbg 11
  • Invisible window

    If I compile in DEBUG (VS2019) it crashes in void V8Helper::setupGL, in this line: setupV8GLExtBindings(tpl);

    If I compile in RELEASE it compiles and the console shows no error, but the window is invisible; I can only see the console (and if I press keys I see them in the log).

    I have a 1070.

    opened by jagenjo 3
  • A question about render.cs

    In render.cs there is a vec2 variable called imgPos; I don't know why it is multiplied by 0.5 and then has 0.5 added. What does it mean? Thank you very much if you can answer my question. :)

    opened by UMR19 2
  • Questions about point cloud display when zooming in

    Hi, thanks for your excellent work. I compiled the demo and modified the settings to load my point cloud; it loaded successfully and performs well. But when I roll the mouse wheel to zoom in and look at the details, lots of points are missing, whereas when I use Potree to display my point cloud it displays perfectly. I am a beginner in computer graphics; maybe the point size is too small?

    Thanks for your reply :)

    opened by UMR19 0
  • ssRG and ssBA not bound to resolve?

    In https://github.com/m-schuetz/compute_rasterizer/blob/master/compute_hqs/render.js

    Both buffers are bound in the render-attribute pass, but neither is explicitly bound in the resolve pass. I always assumed that bindBufferBase affects the currently bound shader, but apparently the binding carries over to the next shader? Just a reminder to look this up to clarify my understanding of how these work.

    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 3, ssRG);
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 4, ssBA);
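    // (in OpenGL, indexed buffer bindings set with bindBufferBase are context
    // state rather than per-program state, so they carry over to the resolve pass)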
    
    opened by m-schuetz 0