Honours project on creating a depth estimation map from two stereo images of featureless regions

Overview

image-processing

This module generates depth maps for shape-blocked-out images.

Install

If working with Anaconda, then from the root directory run:

conda env create --file environment.yml
conda activate image-processing

Otherwise, if Python 3 is installed, pip can be used to install the required packages. From the root directory, run

pip install -r requirements.txt

Files

The core functional files are collection.py, image.py, shape.py, edge.py, and segment.py. Each contains a class of the same name. They follow this order and encapsulate one another: a collection creates three image objects for the left, center, and right images; each image object creates a number of shape objects; shape objects create edge objects; and edge objects create segment objects. helper.py contains assisting functions used by these classes.

This design splits up the information and processes needed to generate a depth map and groups them logically to ease comprehension. Each file is commented well enough to give a general understanding of what each part does.

The only file intended to be accessed directly to retrieve depth maps is collection.py, as it orchestrates the entire process.
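As a schematic sketch of that encapsulation (the constructor signatures here are purely illustrative; the real classes take image data, colours, and so on):

# Schematic only -- not the actual classes' APIs.
class Segment:
    pass

class Edge:
    def __init__(self):
        self.segments = []          # Edge objects create Segment objects

class Shape:
    def __init__(self):
        self.left_edge = Edge()     # each Shape determines its left and right edges
        self.right_edge = Edge()

class Image:
    def __init__(self):
        self.shapes = []            # each Image creates a number of Shape objects

class Collection:
    def __init__(self):
        self.images = [Image(), Image(), Image()]   # left, center, and right views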

Usage

Both main.py and auto_gen.py access collection and have it create depth maps. They require the input images to be stored within a directory in assets/, each such directory containing three subdirectories: cameraLeft/, cameraCenter/, and cameraRight/. Results are saved to saves/, with the generated images stored in saves/generated/. The .img files are object files produced during this process to reduce the work needed the next time the same process is executed. The expected layout is sketched below.
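For reference, a rough sketch of the expected layout (occluded_road is just one example asset directory, taken from the usage examples below):

assets/
    occluded_road/
        cameraLeft/
        cameraCenter/
        cameraRight/
saves/
    generated/
    logs/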

Main

main.py is for individual depth map generation. Up to five arguments can be passed to specify details of the execution.

  1. The name of the desired directory within assets/.
  2. The numerical index (starting at 0) of the specific image set within the innermost subdirectories.
  3. The number indicating which image the depth map visual should be based on (0 for left, 1 for center, 2 for right).
  4. Whether the resulting depth image should be saved.
  5. Whether the resulting depth image should be displayed.

While it can take up to these five arguments, it can also be run with none. In that case the directory within assets/ is randomly selected, as are the index of the image set and the image used to generate the depth map visual, and the results are saved and displayed. Partial arguments are also fine, so long as the order is maintained.
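A minimal sketch of how optional positional arguments with random fallbacks could be handled (illustrative only, not main.py's actual code; the asset names and image-set count are placeholders):

import random
import sys

# Placeholder values -- the real script would derive these from assets/.
asset_dirs = ["occluded_road", "road_no_occlusion"]
image_set_count = 10

args = sys.argv[1:]
directory = args[0] if len(args) > 0 else random.choice(asset_dirs)
index = int(args[1]) if len(args) > 1 else random.randrange(image_set_count)
view = int(args[2]) if len(args) > 2 else random.randrange(3)   # 0 left, 1 center, 2 right
save = args[3].lower() == "true" if len(args) > 3 else True
display = args[4].lower() == "true" if len(args) > 4 else True

print(directory, index, view, save, display)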

Example: Display the depth map based on the left image of occluded_road's first image set, without saving it

python main.py occluded_road 0 0 False True

Note: the last argument, True, is redundant in this case, since display defaults to True.
 
Example: Any road_no_occlusion image set, with any image used to create the depth map visual (the result is automatically saved and displayed)

python main.py road_no_occlusion

 

Example: Anything (the result is automatically saved and displayed)

python main.py

 

The value of re-running it on an image whose depth map has already been generated is that the result is quickly pulled up in the viewer, where, unlike in the static saved image, the individual pixel value under the mouse cursor is shown in the top-right corner.
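A minimal sketch of that kind of interactive viewing, assuming the viewer is matplotlib and using a hypothetical saved file name:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Hypothetical path to an already generated depth map.
depth = mpimg.imread("saves/generated/occluded_road_0.jpg")
plt.imshow(depth)
# Hovering over the figure shows the cursor position and pixel value
# in the window's status area.
plt.show()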

 

Auto_gen

Alternatively, auto_gen.py is intended for the automated creation of all depth map images.

python auto_gen.py

Simply executing it determines the depth map image for every image set and saves them all. The terminal output is saved to a .txt file stored in saves/logs. It does not display the results, as that would greatly impede the process of creating them all.

Optionally, it can take up to two arguments.

  1. Specifies a directory within assets/ to use rather than running for all of them, similar to the first argument for main.py.
  2. Specifies the image to be used as the basis for the depth map visual, similar to the third argument for main.py (0 for left, 1 for center, 2 for right).

Example: All depth images

python auto_gen.py

 
Example: All Shape_based_stereoPairs depth images using the right image

python auto_gen.py Shape_based_stereoPairs 2

 

For both scripts, if a depth map already exists it will not be regenerated, even if the image expected to be used as the basis is different. To force regeneration, remove both the .jpg and the .img files and re-run.
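A minimal sketch of that skip-if-exists behaviour, with hypothetical file names (the actual paths and naming scheme are whatever the scripts produce):

from pathlib import Path

# Hypothetical output paths for one image set.
jpg_path = Path("saves/generated/occluded_road_0.jpg")
img_path = Path("saves/occluded_road_0.img")

if jpg_path.exists() or img_path.exists():
    print("Existing results found, skipping regeneration")
else:
    print("No saved results, generating the depth map")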

How it works

 

Initialization

Upon creation of a collection instance, it first instantiates the left image's Image instance. The shape colours are determined and then each shape is instantiated. The bounding box of each shape is determined, as well as its left and right edges and their segments.

Collection uses the colours determined by the left Image to speed up the other two images' instantiations.

After everything has been created, the segments of each edge of each shape in each image must be assigned. This process first requires determining the displacement of the edges, which is then used to determine which shape does and does not own which segment.

Generally, all but a few stragglers are assigned at this stage. The remaining cases arise when a shape has few edges and the only edge it could own is shared with the ground or sky shape, making it difficult to tell which shape owns it. Ownership is then assigned using additional information about the shapes. Finally, the process checks whether any shapes are the ground or sky, as their depths are not calculated.

At this stage, the image objects are saved.

Depth calculation

Then, using this information about a shape's edges, its depth can be calculated more accurately. Only the edges it owns are used to determine its depth: if it owns only its right side, only the right edge is used; if both are owned, the midpoint is used.
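The depth itself comes from the displacement of a shape's owned edges between views. As a minimal sketch of that idea under a standard pinhole stereo model (the focal length and baseline below are placeholder values, not the project's calibration):

def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.5):
    # Depth of a point whose horizontal displacement between two views is disparity_px.
    if disparity_px <= 0:
        return float("inf")   # no measurable displacement: treat as infinitely far away
    return focal_length_px * baseline_m / disparity_px

# A shape owning both edges might, for example, combine the two edge displacements.
left_disp, right_disp = 24.0, 26.0
print(depth_from_disparity((left_disp + right_disp) / 2))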

However, if the shape is determined to have a varying depth, then its depth can alternatively be calculated using the change of slope between the images.

Finally, once all depth values are found, a modified version of the original image is created: each shape's colour is replaced with its determined depth value, the sky is replaced with pure black, and the ground with pure white. This image is then saved and/or displayed as requested. Which of the three images is re-coloured to produce the depth map visual depends on either a given argument or random selection.
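A minimal sketch of that re-colouring step with NumPy, assuming each shape is drawn in a single flat RGB colour and that its depth has already been scaled to a 0-255 grey value (names and values are illustrative):

import numpy as np

def recolour_depth_map(image, shape_depths, sky_colour=None, ground_colour=None):
    # image: H x W x 3 uint8 array; shape_depths: {(r, g, b): depth grey value 0-255}
    out = np.zeros(image.shape[:2], dtype=np.uint8)
    for colour, depth in shape_depths.items():
        out[np.all(image == colour, axis=-1)] = depth
    if sky_colour is not None:
        out[np.all(image == sky_colour, axis=-1)] = 0       # sky -> pure black
    if ground_colour is not None:
        out[np.all(image == ground_colour, axis=-1)] = 255  # ground -> pure white
    return out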
