Overview

Making point clouds fun again



pyntcloud is a Python 3 library for working with 3D point clouds leveraging the power of the Python scientific stack.

Installation

conda install pyntcloud -c conda-forge

Or:

pip install pyntcloud

Quick Overview

You can access most of pyntcloud's functionality from its core class: PyntCloud.

With PyntCloud you can perform complex 3D processing operations in just a few lines of code. For example, you can:

  • Load a PLY point cloud from disk.
  • Add 3 new scalar fields by converting RGB to HSV.
  • Build a grid of voxels from the point cloud.
  • Build a new point cloud keeping only the nearest point to each occupied voxel center.
  • Save the new point cloud in numpy's NPZ format.

With the following concise code:

from pyntcloud import PyntCloud

# Load a PLY point cloud from disk.
cloud = PyntCloud.from_file("some_file.ply")

# Add 3 new scalar fields by converting RGB to HSV.
cloud.add_scalar_field("hsv")

# Build a 32x32x32 grid of voxels from the point cloud.
voxelgrid_id = cloud.add_structure("voxelgrid", n_x=32, n_y=32, n_z=32)

# Keep only the nearest point to each occupied voxel center.
new_cloud = cloud.get_sample("voxelgrid_nearest", voxelgrid_id=voxelgrid_id, as_PyntCloud=True)

# Save the new point cloud in numpy's NPZ format.
new_cloud.to_file("out_file.npz")
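The "voxelgrid_nearest" step above can be sketched in plain NumPy. This is a toy illustration of the idea (keep only the point closest to the center of each occupied voxel), not pyntcloud's actual implementation:

```python
import numpy as np

def voxel_nearest_sample(points, n=32):
    """Keep only the point closest to the center of each occupied voxel.

    Toy re-implementation of the idea behind "voxelgrid_nearest"
    sampling -- not pyntcloud's actual code.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    sizes = (maxs - mins) / n
    # Integer voxel coordinates for every point, clipped to the grid.
    idx = np.clip(((points - mins) / sizes).astype(int), 0, n - 1)
    centers = mins + (idx + 0.5) * sizes
    dists = np.linalg.norm(points - centers, axis=1)

    keep = {}  # voxel coords -> index of the nearest point seen so far
    for i, voxel in enumerate(map(tuple, idx)):
        if voxel not in keep or dists[i] < dists[keep[voxel]]:
            keep[voxel] = i
    return points[sorted(keep.values())]

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
sampled = voxel_nearest_sample(pts, n=4)
print(sampled.shape)  # at most 4**3 = 64 points remain
```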

Integration with other libraries

pyntcloud offers seamless integration with other 3D processing libraries.

You can create / convert PyntCloud instances from / to many 3D processing libraries using the from_instance / to_instance methods:

import open3d as o3d
from pyntcloud import PyntCloud

# FROM Open3D
original_triangle_mesh = o3d.io.read_triangle_mesh("diamond.ply")
cloud = PyntCloud.from_instance("open3d", original_triangle_mesh)

# TO Open3D
cloud = PyntCloud.from_file("diamond.ply")
converted_triangle_mesh = cloud.to_instance("open3d", mesh=True)  # mesh=True by default

import pyvista as pv
from pyntcloud import PyntCloud

# FROM PyVista
original_point_cloud = pv.read("diamond.ply")
cloud = PyntCloud.from_instance("pyvista", original_point_cloud)

# TO PyVista
cloud = PyntCloud.from_file("diamond.ply")
converted_triangle_mesh = cloud.to_instance("pyvista", mesh=True)

Comments
  • Package Missing in osx-64 Channels

    Package Missing in osx-64 Channels

    Hi,

    I am using Anaconda on an OSX-64 machine. While trying to create the pyntcloud environment from environment.yml, I got the following error:

    NoPackagesFoundError: Package missing in current osx-64 channels: 
      - gst-plugins-base 1.8.0 0
    

    Do you know how to solve this?

    Thanks!

    opened by timzhang642 13
  • New scalar field: Normals

    New scalar field: Normals

    opened by daavoo 11
  • the method plot() shows a black iframe

    the method plot() shows a black iframe

    I seem to have a problem with the .plot() method. Below is a screenshot of what I get when running the Basic_Numpy_Plotting example:

    [screenshot]

    I tried Chrome and Firefox hoping it was browser-related but still no luck, any ideas?

    Bug 
    opened by cicobalico 10
  • Bounding Box Filter

    Bounding Box Filter

    I've noticed that the bounding box filter does not work on the z axis. It works on x and y, but for some reason it will not apply in the z direction. Any thoughts on why this is happening or what I'm doing wrong?
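Whatever the cause of the z-axis problem, an equivalent filter is easy to write by hand, because `cloud.points` is a plain pandas DataFrame with "x", "y", "z" columns. A minimal sketch (hand-rolled, not pyntcloud's built-in filter):

```python
import pandas as pd

def bounding_box_mask(points, min_xyz, max_xyz):
    """Boolean mask of the points inside an axis-aligned bounding box.

    `points` is a DataFrame with "x", "y", "z" columns, the layout
    pyntcloud uses for `cloud.points`.
    """
    mask = pd.Series(True, index=points.index)
    for axis, lo, hi in zip("xyz", min_xyz, max_xyz):
        mask &= points[axis].between(lo, hi)  # inclusive on both ends
    return mask

points = pd.DataFrame({
    "x": [0.0, 1.0, 5.0],
    "y": [0.0, 1.0, 1.0],
    "z": [0.0, 9.0, 1.0],
})
inside = points[bounding_box_mask(points, (0, 0, 0), (2, 2, 2))]
print(len(inside))  # 1 (the z=9 and x=5 points are filtered out)
```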

    opened by threerivers3d-jc 9
  • Seeing plots in Jupyter using QuickStart file

    Seeing plots in Jupyter using QuickStart file

    Hola! Great job with this library! I'm just getting started using it and it's definitely going to be really useful for my work.

    I went through most of the issues and I see that there's been similar issues to what I'm getting with the .plot() function. I've tried those solutions but just haven't been able to figure it out yet.

    I'm running your QuickStart file, and when I get to the first scene.plot() all I see is a black screen with the word 'screenshot' in an orange box in the top left and the logo in the middle of the screen. I'm using a Mac and Chrome. I have no clue where I should be looking and any help would be appreciated! Thanks!

    opened by nandoLidar 8
  • Loading obj file doesn't work for objects which do not only consist of triangles

    Loading obj file doesn't work for objects which do not only consist of triangles

    When I attempt to load an obj file that has e.g. rectangles as facets, pyntcloud crashes with an AssertionError inside pandas:

    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    <ipython-input-36-170af64711ff> in <module>()
    ----> 1 obj.read_obj("/Users/johannes/test.obj")
    
    ~/anaconda3/envs/suction/lib/python3.6/site-packages/pyntcloud/io/obj.py in read_obj(filename)
         50     f = [re.split(r'\D+', x) for x in f]
         51 
    ---> 52     mesh = pd.DataFrame(f, dtype='i4', columns=mesh_columns)
         53     # start index at 0
         54     mesh -= 1
    
    ~/anaconda3/envs/suction/lib/python3.6/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
        367                     if is_named_tuple(data[0]) and columns is None:
        368                         columns = data[0]._fields
    --> 369                     arrays, columns = _to_arrays(data, columns, dtype=dtype)
        370                     columns = _ensure_index(columns)
        371 
    
    ~/anaconda3/envs/suction/lib/python3.6/site-packages/pandas/core/frame.py in _to_arrays(data,     columns, coerce_float, dtype)
       6282     if isinstance(data[0], (list, tuple)):
       6283         return _list_to_arrays(data, columns, coerce_float=coerce_float,
    -> 6284                                dtype=dtype)
       6285     elif isinstance(data[0], collections.Mapping):
       6286         return _list_of_dict_to_arrays(data, columns,
    
    ~/anaconda3/envs/suction/lib/python3.6/site-packages/pandas/core/frame.py in _list_to_arrays(data, columns, coerce_float, dtype)
       6361         content = list(lib.to_object_array(data).T)
       6362     return _convert_object_array(content, columns, dtype=dtype,
    -> 6363                                  coerce_float=coerce_float)
       6364 
       6365 
    
    ~/anaconda3/envs/suction/lib/python3.6/site-packages/pandas/core/frame.py in _convert_object_array(content, columns, coerce_float, dtype)
       6418             # caller's responsibility to check for this...
       6419             raise AssertionError('%d columns passed, passed data had %s '
    -> 6420                                  'columns' % (len(columns), len(content)))
       6421 
       6422     # provide soft conversion of object dtypes
    
    AssertionError: 9 columns passed, passed data had 12 columns
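Until read_obj handles polygons, one workaround is to fan-triangulate any face with more than three vertices before building the mesh DataFrame. A sketch of the idea (a hypothetical helper, not part of pyntcloud):

```python
def triangulate_faces(faces):
    """Fan-triangulate polygon faces so every face has exactly 3 vertices.

    Each face is a list of vertex indices; quads and larger polygons
    are split into triangles that share the face's first vertex.
    """
    triangles = []
    for face in faces:
        for i in range(1, len(face) - 1):
            triangles.append([face[0], face[i], face[i + 1]])
    return triangles

faces = [[0, 1, 2], [3, 4, 5, 6]]  # one triangle, one quad
print(triangulate_faces(faces))
# [[0, 1, 2], [3, 4, 5], [3, 5, 6]]
```

With every face reduced to three indices, the column count matches the mesh DataFrame and the AssertionError above should not trigger.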
    
    Bug Debate 
    opened by hildensia 8
  • Stripped Down Version for RPi Use [Request]

    Stripped Down Version for RPi Use [Request]

    Hi, this looks like it'd be the best solution to my problem, but for the fact that it requires Numba, which is very difficult to install on the RPi, if it is possible at all. Any chance of creating a stripped-down version? I've been hunting for a good while for a way to tessellate a point cloud.

    Feature Request 
    opened by AcrimoniousMirth 8
  • Refresh plot

    Refresh plot

    Hi,

    Is it possible to refresh the point cloud plot, i.e. display a video of point clouds whilst still being able to interact in the browser and change views? Alternatively, is it possible to produce PNGs rather than HTMLs?

    Thanks!

    Feature Request 
    opened by fferroni 8
  • Issue with plot() in Jupyter Lab

    Issue with plot() in Jupyter Lab

    I know this must be frustrating, or I might have done something very stupid, but I am not seeing any error message. When I run the plot() function I see

    Renderer(camera=PerspectiveCamera(aspect=1.6, fov=90.0, position=(135.3456573486328, 9146.374328613281, 41812.…

    HBox(children=(Label(value='Background color:'), ColorPicker(value='black'), Label(value='Point size:'), Float…

    instead of the actual render. I tried starting a simple server too. I am working with Chrome on Ubuntu 16.04 and Python 3.5.

    Bug 
    opened by ShivendraAgrawal 7
  • 2D Snapshot

    2D Snapshot

    Is there a way to get a single, fixed, 2D snapshot of the point cloud? I'm having some trouble embedding the IFrame created by plot in Google's Colaboratory (localhost refused to connect). A static snapshot would help, even if it is not interactive.

    opened by dorodnic 7
  • could you help out converting it to .obj, please?

    could you help out converting it to .obj, please?

    https://storage.googleapis.com/nvidia-dev/113620.ply

    At my end I either lose the points or lose the color during conversion:

    [GCC 8.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.

    from pyntcloud import PyntCloud
    diamond = PyntCloud.from_file("cloud.ply")
    convex_hull_id = diamond.add_structure("convex_hull")
    convex_hull = diamond.structures[convex_hull_id]
    diamond.to_file("diamond_hull.obj", also_save=["mesh"])

    Feature Request Question 
    opened by AndreV84 6
  • kdtree radius search not working

    kdtree radius search not working

    Describe the bug When running radius search on my point cloud, I get an error:

    File "D:\repos\scripts\venv\lib\site-packages\pyntcloud\core_class.py", line 590, in get_neighbors
        return r_neighbors(kdtree, r)
    File "D:\repos\scripts\venv\lib\site-packages\pyntcloud\neighbors\r_neighbors.py", line 21, in r_neighbors
        return np.array(kdtree.query_ball_tree(kdtree, r))
    ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (5342,) + inhomogeneous part.

    k-neighbor search works as expected.

    To Reproduce My code to reproduce the bug:

    cloud = PyntCloud.from_file(pc_path)
    kdtree_id = cloud.add_structure("kdtree")
    r_neighbors = cloud.get_neighbors(r=5, kdtree=kdtree_id)
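The error comes from the last frame of the traceback: scipy's query_ball_tree returns one neighbor list per point, and the lists have different lengths, so recent NumPy (1.24 and later) refuses to pack them into a regular array. A minimal reproduction and workaround, independent of pyntcloud:

```python
import numpy as np

# `query_ball_tree` returns one neighbor list per query point; radius
# searches naturally produce lists of *different* lengths.
ragged = [[0, 1], [0, 1, 2], [1]]

# np.array(ragged) raises ValueError on NumPy >= 1.24
# (older versions only emitted a deprecation warning):
try:
    np.array(ragged)
except ValueError as exc:
    print("rectangular array failed:", exc)

# An explicit object array of lists is still allowed:
neighbors = np.array(ragged, dtype=object)
print(neighbors.shape)  # (3,)
```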

    Desktop (please complete the following information):

    • OS: Win 10
    • pyntcloud: 0.3.1
    • python 3.9
    opened by tholzmann 1
  • Add CodeQL workflow for GitHub code scanning

    Add CodeQL workflow for GitHub code scanning

    Hi daavoo/pyntcloud!

    This is a one-off automatically generated pull request from LGTM.com. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository; take a look! We tested it before opening this pull request, so all should be working. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ


    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • conda-script.py: error: unrecognized arguments: nltk - Jupyter Notebook

    conda-script.py: error: unrecognized arguments: nltk - Jupyter Notebook

    Hello everyone. I am trying to download nltk into my notebook but get an error message every time.

    Here is what I input: conda install -c conda-forge nltk

    Here is the error message I see every time:

    Note: you may need to restart the kernel to use updated packages.
    usage: conda-script.py [-h] [-V] command ...
    conda-script.py: error: unrecognized arguments: nltk

    I have tried restarting the kernel, closing the program, and trying in a blank notebook, but nothing seems to work. Let me know what I can do to fix this! Thanks

    opened by mac-1117 0
  • The logic of `io.las.read_las_with_laspy()` may not meet the las data specification.

    The logic of `io.las.read_las_with_laspy()` may not meet the las data specification.

    Hello. Thanks for the nice library! I think I may have found a bug, could you please check?

    Describe the bug The logic of io.las.read_las_with_laspy() may not meet the las data specification. https://github.com/daavoo/pyntcloud/blob/c9dcf59eacbec33de0279899a43fe73c5c094b09/pyntcloud/io/las.py#L46

    To Reproduce Steps to reproduce the behavior:

    • Download point cloud data (.las) of Kakegawa Castle.
      • https://www.geospatial.jp/ckan/dataset/kakegawacastle/resource/61d02b61-3a44-4a5b-b263-814c6aa23551
      • Please note that the data is 2GB in zip file and 5GB after unzipping.
      • The file name is in Japanese, so please be careful of the character encoding.
      • Kakegawa Castle is https://en.wikipedia.org/wiki/Kakegawa_Castle
    • Feel free to rename the file as you wish. Here, the file name is KakegawaCastle.las.
    • Execute the following code to get the xyz coordinates.
      • You will find 190 million points.
    from pyntcloud import PyntCloud
    cloud = PyntCloud.from_file("./KakegawaCastle.las")
    cloud.points
    # x y z intensity bit_fields raw_classification scan_angle_rank user_data point_source_id red green blue
    # 0 37.910053 71.114777 28.936932 513 0 1 0 0 29 138 122 127
    # 1 37.690052 75.975777 28.918930 2309 0 1 0 0 29 15 5 14
    # 2 38.465054 71.277779 33.523930 64149 0 1 0 0 29 44 15 35
    # 3 32.406052 78.586777 30.808931 19758 0 1 0 0 29 99 54 59
    # 4 30.372051 86.346779 30.809931 257 0 1 0 0 29 107 56 55
    # ...	...	...	...	...	...	...	...	...	...	...	...	...
    # 192366074 151.807999 172.604996 17.660999 50886 0 1 0 0 29 198 198 190
    # 192366075 152.425003 173.162994 16.458000 25186 0 1 0 0 29 101 96 96
    # 192366076 152.126007 172.781998 16.620001 30840 0 1 0 0 29 121 120 116
    # 192366077 152.085007 172.682999 17.497000 40863 0 1 0 0 29 166 157 146
    # 192366078 151.832993 173.360001 16.886000 31868 0 1 0 0 29 132 121 115
    # 192366079 rows × 12 columns
    
    • At this time, the first point in column x is 37.910053
    • If you run the following command, the data should look like this.
      • pdal info: https://pdal.io/apps/info.html
    % pdal info /KakegawaCastle.las -p 0
    {
      "file_size": 5001518281,
      "filename": "KakegawaCastle.las",
      "now": "2022-06-14T09:39:43+0900",
      "pdal_version": "2.4.0 (git-version: Release)",
      "points":
      {
        "point":
        {
          "Blue": 32640,
          "Classification": 1,
          "EdgeOfFlightLine": 0,
          "Green": 31365,
          "Intensity": 513,
          "NumberOfReturns": 0,
          "PointId": 0,
          "PointSourceId": 29,
          "Red": 35445,
          "ReturnNumber": 0,
          "ScanAngleRank": 0,
          "ScanDirectionFlag": 0,
          "UserData": 0,
          "X": -44490.84295,
          "Y": -135781.1752,
          "Z": 54.58493098
        }
      },
      "reader": "readers.las"
    }
    
    • The value of x is -44490.84295, which is different from the value output by pyntcloud!
    • The above value can be calculated from the data laspy outputs, as follows.
    import laspy
    las = laspy.read("./KakegawaCastle.las")
    header = las.header
    
    # first x point value: 531578298
    x_point = las.X[0]
    
    # x scale: 7.131602618438667e-08 -> 0.00000007
    x_scale = header.x_scale
    
    # x offset: -44528.753
    x_offset = header.x_offset
    
    # x_coordinate output from above variables: -44490.842948180776
    real_coordinate_x = (x_point * x_scale) + x_offset
    
    • The value calculated from laspy based on EPSG:6676 is indeed at Kakegawa Castle!
      • https://www.google.co.jp/maps/place/34%C2%B046'30.3%22N+138%C2%B000'50.1%22E/@34.775077,138.0117243,17z/data=!3m1!4b1!4m5!3m4!1s0x0:0x2ce21e9ef0b19341!8m2!3d34.775077!4d138.013913?hl=ja
    • But in read_las_with_laspy(), the logic is as follows, and the offset values are not added https://github.com/daavoo/pyntcloud/blob/c9dcf59eacbec33de0279899a43fe73c5c094b09/pyntcloud/io/las.py#L55

    Expected behavior Offset values are taken into account for the xyz coordinates of the DataFrame.
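The transformation the issue asks for is the one mandated by the LAS specification: real = raw * scale + offset. Applying it to the values quoted in this report reproduces pdal's output, in plain Python with no laspy needed:

```python
# Per the LAS specification, real-world coordinates are reconstructed
# from the stored raw integers with the header's scale and offset:
#     real = raw * scale + offset
raw_x = 531578298                  # las.X[0], from the report above
x_scale = 7.131602618438667e-08    # header.x_scale
x_offset = -44528.753              # header.x_offset

real_x = raw_x * x_scale + x_offset
print(real_x)  # -44490.842948180776, matching pdal's output
```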

    Screenshots: none.

    Desktop (please complete the following information):

    • OS: macOS Monterey v12.4
    • Browser: not used.
    • Version
    conda -V
    conda 4.12.0
    ❯ conda list | grep pyntcloud
    pyntcloud 0.3.0 pyhd8ed1ab_0 conda-forge
    

    Additional context If the above context looks OK, shall I create a PullRequest?

    Bug 
    opened by nokonoko1203 3
Releases(v0.3.1)
  • v0.3.1(Jul 31, 2022)

    What's Changed

    • Use KDTree instead of cKDTree by @daavoo in https://github.com/daavoo/pyntcloud/pull/339
    • Make value take offsets by @nokonoko1203 in https://github.com/daavoo/pyntcloud/pull/335

    New Contributors

    • @nokonoko1203 made their first contribution in https://github.com/daavoo/pyntcloud/pull/335

    Full Changelog: https://github.com/daavoo/pyntcloud/compare/v0.3.0...v0.3.1

  • v0.3.0(May 27, 2022)

    What's Changed

    • Upgrade the api to laspy 2.0 by @SBCV in https://github.com/daavoo/pyntcloud/pull/330

    Full Changelog: https://github.com/daavoo/pyntcloud/compare/v0.2.0...v0.3.0

  • v0.2.0(Apr 15, 2022)

    What's Changed

    • fix python 10 ci by @fcakyon in https://github.com/daavoo/pyntcloud/pull/322
    • filters: kdtree: Remove prints by @daavoo in https://github.com/daavoo/pyntcloud/pull/325
    • PyVista: point_arrays -> point_data by @banesullivan in https://github.com/daavoo/pyntcloud/pull/327

    New Contributors

    • @fcakyon made their first contribution in https://github.com/daavoo/pyntcloud/pull/322

    Full Changelog: https://github.com/daavoo/pyntcloud/compare/v0.1.6...v0.2.0

  • v0.1.6(Jan 12, 2022)

    What's Changed

    • use raw string literal to address DeprecationWarning by @robin-wayve in https://github.com/daavoo/pyntcloud/pull/316
    • Add bool dtype support for PLY files by @Nicholas-Mitchell in https://github.com/daavoo/pyntcloud/pull/321

    New Contributors

    • @robin-wayve made their first contribution in https://github.com/daavoo/pyntcloud/pull/316

    Full Changelog: https://github.com/daavoo/pyntcloud/compare/v0.1.5...v0.1.6

  • v0.1.5(Aug 14, 2021)

  • v0.1.4(Feb 17, 2021)

  • v0.1.3(Oct 13, 2020)

    Features

    • Add pythreejs voxelgrid plotting and voxel colors #280
    • Add support for laz files (#288)
    • Added support for pylas (#291)

    Bugfixes

    • Fix pyvista integration

    Other

    Minor updates to docs and code maintenance

  • v0.1.1(Oct 7, 2019)

  • 0.1.0(Oct 4, 2019)

    Features

    • PyntCloud new methods: from_instance and to_instance for integration with other 3D processing libraries.

    PyVista and Open3D are currently supported.

    • Add GitHub actions workflow
    • Include License file in Manifest.in

    Bugfixes

    • Fix tests that were not being run in C.I.
    • Fix test_geometry failing tests
  • v0.0.2(Sep 29, 2019)

  • v0.0.1(Jul 14, 2019)

Owner
David de la Iglesia Castro
Passionate about learning.