Synthetic data for the people.

Overview

zpy: Synthetic data in Blender.

(Image: a synthetic raspberry pi)

Abstract

Collecting, labeling, and cleaning data for computer vision is a pain. Jump into the future and create your own data instead! Synthetic data is faster to develop with, effectively infinite, and gives you full control to prevent bias and privacy issues from creeping in. We created zpy to make synthetic data easy: it simplifies the scene creation process and provides a straightforward way to generate synthetic data at scale.

Install

Install: Using Blender GUI

First download the latest zip (you want the one called zpy_addon-v*.zip). Then open up Blender. Navigate to Edit -> Preferences -> Add-ons. You should be able to install and enable the addon from there.

(Image: enabling the addon)

Install: Linux: Using Install Script

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/ZumoLabs/zpy/main/install.sh)"

Set these environment variables to pin specific versions:

export BLENDER_VERSION="2.91"
export BLENDER_VERSION_FULL="2.91.0"
export ZPY_VERSION="v1.0.0"

Documentation

More documentation can be found here.

Examples

Tutorials

Projects

CLI

We provide a simple CLI; you can find its documentation here.

Contributing

We welcome community contributions! Search through the current issues or open your own.

Licence

This release of zpy is under the GPLv3 license, a free copyleft license used by Blender. TL;DR: It's free, use it!

BibTeX

If you use zpy in your research, we would appreciate the citation!

@article{zpy,
  title={zpy: Synthetic data for Blender.},
  author={Ponte, H. and Ponte, N. and Karatas, K.},
  journal={GitHub. Note: https://github.com/ZumoLabs/zpy},
  volume={1},
  year={2020}
}
Comments
  • Improving Rendering & Processing Speed with AWS

    Is your feature request related to a problem? Please describe. I'm frustrated by the time it is taking to process images on my local device (the rendering isn't the worst but the annotations are taking a long time at the end). I would like to use AWS EC2 instances to reduce the time required to create images and annotations (especially the category segmentation which seems to take a long time to encode into the annotation).

    Do you have any suggestions as to the kinds of instances or methods on AWS that can be used to reduce rendering and processing time? That would be immensely helpful. Thank you.
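
    Not an authoritative answer, but one common speed-up is to make sure Blender's Cycles engine renders on the GPU. Below is a minimal bpy sketch; it assumes an NVIDIA GPU (e.g. a g4dn EC2 instance) and the CUDA backend. Note it only helps the render step; annotation encoding runs on the CPU, so a larger CPU instance is likely what helps there.

    # Sketch: switch Cycles to GPU rendering via bpy. Assumes an NVIDIA
    # GPU and the CUDA backend; run inside Blender's Python environment.
    import bpy

    scene = bpy.context.scene
    scene.render.engine = "CYCLES"
    scene.cycles.device = "GPU"

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "CUDA"
    prefs.get_devices()  # refresh the detected device list
    for device in prefs.devices:
        device.use = True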

    question 
    opened by tgorm1965 14
  • Is it possible to segment parts of an object based on its material?

    Is your feature request related to a problem? Please describe. I would like to segment a complex mesh that is made up of different components, with a different material for every component. Is it possible to segment based on the material? Or do I need to separate the single mesh into multiple components?
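
    Not sure whether this is supported directly, but one workaround sketch: since zpy.objects.segment works per object (as in the examples in this thread), you could first split the mesh by material with Blender's built-in separate operator, then segment each piece with its own color. The object name below is hypothetical.

    # Sketch: split a mesh into per-material objects, then segment each
    # piece with its own color. The object name "part" is hypothetical.
    import bpy
    import zpy

    obj = bpy.data.objects["part"]
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)

    # Built-in Blender operator: one new object per material.
    bpy.ops.object.mode_set(mode="EDIT")
    bpy.ops.mesh.separate(type="MATERIAL")
    bpy.ops.object.mode_set(mode="OBJECT")

    # Give every resulting piece its own segmentation color.
    for piece in bpy.context.selected_objects:
        color = zpy.color.random_color(output_style="frgb")
        zpy.objects.segment(piece.name, color=color)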

    question 
    opened by tgorm1965 13
  • Bounding Box generation Error

    I've created a Blender scene where I have a die, and I'm using zpy to generate a dataset composed of images obtained by rotating around the object and jittering both the die position and the camera. Everything seems to be working properly, but the bounding boxes generated in the annotation file get progressively worse with each picture.

    For example, this is the first image's bounding box: (image)

    This one is from halfway through: (image)

    And this is one of the last ones: (image)

    This is my code (I've cut some stuff, I can't paste it all for some reason):

    
    import math
    import random
    from pathlib import Path

    import bpy
    import zpy

    # NOTE: rotate() is defined elsewhere in the script (part of the code
    # that was cut above).

    def run(num_steps=20):
        
        # Random seed results in unique behavior
        zpy.blender.set_seed()
    
        # Create the saver object
        saver = zpy.saver_image.ImageSaver(description="Domain randomized dado")
    
        # Add the dado category
        dado_seg_color = zpy.color.random_color(output_style="frgb")
        saver.add_category(name="dado", color=dado_seg_color)
    
        # Segment the dado (make sure a material exists for the object!)
        zpy.objects.segment("dado", color=dado_seg_color)
        
        # Original dice pose
        zpy.objects.save_pose("dado", "dado_pose_og")
        
        # Original camera pose
        zpy.objects.save_pose("Camera", "Camera_pose_og")
    
        # Save the positions of objects so we can jitter them later
        zpy.objects.save_pose("Camera", "cam_pose")
        zpy.objects.save_pose("dado", "dado_pose")
        
        asset_dir = Path(bpy.data.filepath).parent
        texture_paths = [
            asset_dir / Path("textures/madera.png"),
            asset_dir / Path("textures/ladrillo.png"),
        ]
        
    
        # Run the sim.
        for step_idx in range(num_steps):
            # Example logging
            # stp = zpy.blender.step()
            # print("BLENDER STEPS: ", stp.num_steps)
            # log.debug("This is a debug log")
    
            # Return camera and dado to original positions
            zpy.objects.restore_pose("Camera", "cam_pose")
            zpy.objects.restore_pose("dado", "dado_pose")
            
            # Rotate camera
            location = bpy.context.scene.objects["Camera"].location
            angle = step_idx*360/num_steps
            location = rotate(location, angle, axis=(0, 0, 1))
            bpy.data.objects["Camera"].location = location
            
            # Jitter dado pose
            zpy.objects.jitter(
                "dado",
                translate_range=((-300, 300), (-300, 300), (0, 0)),
                rotate_range=(
                    (0, 0),
                    (0, 0),
                    (-math.pi, math.pi),
                ),
            )
    
            # Jitter the camera pose
            zpy.objects.jitter(
                "Camera",
                translate_range=(
                    (-5, 5),
                    (-5, 5),
                    (-5, 5),
                ),
            )
    
            # Camera should be looking at dado
            zpy.camera.look_at("Camera", bpy.data.objects["dado"].location)
            
            texture_path = random.choice(texture_paths)
            
            # HDRIs are like a pre-made background with lighting
            # zpy.hdris.random_hdri()
    
            # Pick a random texture from the 'textures' folder (relative to blendfile)
            # Textures are images that we will map onto a material
            new_mat = zpy.material.make_mat_from_texture(texture_path)
            # zpy.material.set_mat("dado", new_mat)
    
            # Have to segment the new material
            zpy.objects.segment("dado", color=dado_seg_color)
    
            # Jitter the dado material
            # zpy.material.jitter(bpy.data.objects["dado"].active_material)
    
            # Jitter the HSV for empty and full images
            '''
            hsv = (
                random.uniform(0.49, 0.51),  # (hue)
                random.uniform(0.95, 1.1),  # (saturation)
                random.uniform(0.75, 1.2),  # (value)
            )
            '''
    
            # Name for each of the output images
            rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
            iseg_image_name = zpy.files.make_iseg_image_name(step_idx)
            depth_image_name = zpy.files.make_depth_image_name(step_idx)
    
            # Render image
            zpy.render.render(
                rgb_path=saver.output_dir / rgb_image_name,
                iseg_path=saver.output_dir / iseg_image_name,
                depth_path=saver.output_dir / depth_image_name,
                # hsv=hsv,
            )
    
            # Add images to saver
            saver.add_image(
                name=rgb_image_name,
                style="default",
                output_path=saver.output_dir / rgb_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=iseg_image_name,
                style="segmentation",
                output_path=saver.output_dir / iseg_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=depth_image_name,
                style="depth",
                output_path=saver.output_dir / depth_image_name,
                frame=step_idx,
            )
    
            # Add annotation to segmentation image
            saver.add_annotation(
                image=rgb_image_name,
                seg_image=iseg_image_name,
                seg_color=dado_seg_color,
                category="dado",
            )
            
    
        # Write out annotations
        saver.output_annotated_images()
        saver.output_meta_analysis()
    
        # ZUMO Annotations
        zpy.output_zumo.OutputZUMO(saver).output_annotations()
    
        # COCO Annotations
        zpy.output_coco.OutputCOCO(saver).output_annotations()
        
        # Return to the initial state
        zpy.objects.restore_pose("dado", "dado_pose_og")
        zpy.objects.restore_pose("Camera", "Camera_pose_og")
    

    Is this my fault or an actual bug?

    • OS: Ubuntu 20.04
    • Python 3.9
    • Blender 2.93
    • zpy: latest
    bug 
    opened by franferraz98 7
  • Better Run button

    To properly run the "run" script, a zpy user has to click the "Run" button under the ZPY panel. If they click the "Run" button in the Blender script panel instead, it will not save and reset the scene, resulting in simulation drift. We should provide a warning, or simply make the "Run" button in the Blender script panel a valid option as well.
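
    For reference, a sketch of that save-and-reset behavior as a wrapper one could put around run(). The helper name is ours, not the addon's, and reverting mid-script can be fragile, so this is illustrative only.

    # Sketch: emulate the ZPY panel's save-and-reset behavior around a
    # run function. Illustrative only; not the actual addon internals.
    import bpy

    def run_with_reset(run_fn):
        bpy.ops.wm.save_mainfile()  # save a known-good scene state
        try:
            run_fn()
        finally:
            # Reload the saved .blend, undoing any simulation drift.
            bpy.ops.wm.revert_mainfile()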

    bug enhancement 
    opened by hu-po 7
  • Question: Is there a method to obtain the height and width of every pixel of a depth map?

    Hi, I would like to obtain this information to accompany the depth maps produced by ZPY. Is this possible using ZPY's implementation? How would I go about this? Thank you!
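
    There doesn't appear to be a dedicated zpy call for this, but under a pinhole camera model it can be derived from Blender's camera settings: the metric extent of one pixel at depth d is d * (sensor_extent / focal_length) / resolution. A sketch follows; the camera name and the horizontal sensor fit are assumptions.

    # Sketch: per-pixel metric width/height from a depth map under a
    # pinhole model. Assumes a camera data-block named "Camera" and a
    # horizontal sensor fit; depth is a (res_y, res_x) array in scene units.
    import bpy
    import numpy as np

    def per_pixel_size(depth: np.ndarray):
        cam = bpy.data.cameras["Camera"]
        render = bpy.context.scene.render
        pixel_w = depth * (cam.sensor_width / cam.lens) / render.resolution_x
        pixel_h = depth * (cam.sensor_height / cam.lens) / render.resolution_y
        return pixel_w, pixel_h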

    question 
    opened by tgorm1965 6
  • Code Formatting

    Ran automatic code formatting with the following git pre-commit hooks:

    • black
    • trailing-whitespace-fixer
    • end-of-file-fixer
    • add-trailing-comma

    Added a pre-commit config file in case any contributors would like to use these pre-commit git hooks going forward.

    opened by zkneupper 6
  • Cannot create COCO annotations when RGB and segmentation images are in separate folders

    Hi, I am trying to generate data with annotations, where my output_dir is the parent folder and I save my RGB and ISEG images in child folders, for example data/rgb_images and data/iseg_images. I also have another folder for annotations, data/coco_annotations. I am able to render images successfully; however, I cannot generate COCO annotations. As far as I understand from the source code, it parses the image path as saver.output_dir + filename instead of the directories where the images are actually stored.

    Thanks!!!
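
    Until the path handling supports subfolders, one possible workaround (plain Python, not a zpy API; the annotation filename below is an assumption) is to rewrite the file_name entries in the generated COCO JSON after the fact:

    # Sketch: point COCO file_name entries at the subfolders where the
    # images actually live. The coco.json filename is an assumption.
    import json
    from pathlib import Path

    coco_path = Path("data/coco_annotations/coco.json")
    coco = json.loads(coco_path.read_text())
    for image in coco["images"]:
        name = Path(image["file_name"]).name
        subdir = "iseg_images" if "iseg" in name else "rgb_images"
        image["file_name"] = f"{subdir}/{name}"
    coco_path.write_text(json.dumps(coco, indent=2))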

    opened by aadi-mishra 4
  • Allow Custom scaling for HDRI Background

    Hi, this is a small feature I'd like added to the zpy.hdris.random_hdri() function. Currently, the function doesn't accept a custom scale from the user and renders the HDRI background at the default scale w.r.t. the 3D object.

    It would be helpful if a scale parameter could be added to random_hdri() and passed through to load_hdri(). This would allow us to play with the HDRI background scaling.
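
    The requested change might look roughly like this (a sketch against hypothetical internals; pick_random_hdri_path stands in for however zpy actually selects the HDRI):

    # Sketch of the requested passthrough; signatures are hypothetical.
    def random_hdri(scale=1.0):
        hdri_path = pick_random_hdri_path()  # placeholder for zpy's selection logic
        return load_hdri(hdri_path, scale=scale)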

    hdri enhancement 
    opened by Resh-97 4
  • Give multiple objects the same tag

    Describe the solution you'd like Not sure if this already exists, but I'd like to be able to give numerous object meshes one label. If it does exist, please let me know!
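
    If nothing built-in exists, a workaround sketch: register one category and segment every mesh with the same color, reusing the saver.add_category / zpy.objects.segment pattern from the examples in this thread. Object names below are hypothetical.

    # Sketch: give several meshes one shared label by segmenting each
    # with the same category color. Object names are hypothetical.
    import zpy

    saver = zpy.saver_image.ImageSaver(description="shared tag example")
    wheel_color = zpy.color.random_color(output_style="frgb")
    saver.add_category(name="wheel", color=wheel_color)

    for name in ["wheel_front_left", "wheel_front_right",
                 "wheel_rear_left", "wheel_rear_right"]:
        zpy.objects.segment(name, color=wheel_color)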

    question 
    opened by yashdutt20 4
  • AOV Pass Error

    Describe the bug A KeyError is occurring in zpy. Can you help me figure out how to fix this?

    To Reproduce Steps to reproduce the behavior: I have followed the Suzanne script from the first video of the YouTube channel.

    Expected behavior To save and render the images.

    Screenshots

      File "/Applications/Blender.app/Contents/Resources/2.91/python/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x1576194d0>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x15761db00>)
      In call to configurable 'render_aov' (<function render_aov at 0x157623560>)
    Error: Python script failed, check the message in the system console
    Executing script failed with exception
    Error: Python script failed, check the message in the system console

    Desktop (please complete the following information):

    • OS: Mac OS Catalina 10.15.5
    • latest zpy
    opened by yashdutt20 3
  • KeyError: 'bpy_struct[key]: key "aovs" not found'

    Describe the bug I downloaded the Suzanne 1 example from your repository and ran it using Blender 2.91 and your library.

    I got the following error:

    Traceback (most recent call last):
      File "/run.py", line 78, in <module>
      File "/run.py", line 35, in run
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 204, in render_aov
        output_node = make_aov_file_output_node(style=style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 82, in make_aov_file_output_node
        zpy.render.make_aov_pass(style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x7f73bfd25f80>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x7f73bfd25ef0>)
      In call to configurable 'render_aov' (<function render_aov at 0x7f73bfd28f80>)
    Error: Python script failed, check the message in the system console
    

    To Reproduce Steps to reproduce the behavior:

    1. Install Blender 2.91.
    2. Install zpy addon and zpy-zumo Python library.
    3. Download Suzanne 1 run.py script.
    4. Run Suzanne 1 run.py script.

    Expected behavior The Suzanne 1 run.py script runs without errors.

    bug 
    opened by VictorAtPL 3
  • Material jitter

    Hi

    How would I go about jittering just one object's material, while leaving the others constant?

    So, for example, say I've got a car object. Car tyres are always matt black and the taillights are always red, but the paintwork varies: it might be gloss red or matt blue.

    Is this to do with the way materials are assigned in Blender? I.e. the tyres would be fixed as black, and the paintwork would be inherited or determined by zpy.
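
    For illustration, one approach might be to jitter only the paint material slot, reusing zpy.material.jitter from the examples above (object and material names here are hypothetical):

    # Sketch: jitter only the paint material, leaving the tyre and
    # taillight materials untouched. Names are hypothetical.
    import bpy
    import zpy

    car = bpy.data.objects["car"]
    for slot in car.material_slots:
        if slot.material is not None and slot.material.name == "paintwork":
            zpy.material.jitter(slot.material)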

    thank you

    Andrew

    opened by barney2074 0
  • HDRI & texture randomisation

    Hi,

    thank you for zpy: loving it, and it's just what I need!

    I've worked through the tutorials. The Suzanne 3 YouTube content seems to differ from the latest Suzanne 3 Python run.py on GitHub. The YouTube version has a Python list of explicit texture and HDRI paths, whereas the GitHub version seems to have moved this into a function?

    To get it to work, from the documentation I thought I just needed to create a textures directory and an hdris/1k directory in the same folder as the .blend file, like so:

    path/foo.blend
    path/textures/1.jpg
    path/textures/2.jpg
    path/hdris/1k/a.exr
    path/hdris/1k/b.exr

    However, I get a bunch of errors, so it looks like this is not correct. I would be very grateful if you could point me in the right direction. Thanks again.

    Andrew

    opened by barney2074 1
  • How to get the Normalised segmentation?

    I want to generate segmentation_float with zpy.output_coco.OutputCOCO(saver).output_annotations().

    When I changed the code, it didn't return any segmentation, just the bbox.

    The reason I want to normalize these is that I am rendering at 4K image size and getting large annotation files. For example, for 5 images the annotation file was 17 MB.

            {
                "category_id": 0,
                "image_id": 0,
                "id": 0,
                "iscrowd": false,
                "bbox": [
                    311.01,
                    239.01,
                    16.980000000000018,
                    240.98000000000002
                ],
                "area": 4091.8404000000046
            },
            {
                "category_id": 1,
                "image_id": 0,
                "id": 1,
                "iscrowd": false,
                "bbox": [
                    279.01,
                    223.01,
                    79.98000000000002,
                    120.98000000000002
                ],
                "area": 9675.980400000004
            },
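
    For what it's worth, normalizing the values shown above is just a division by the image dimensions; a plain-Python sketch (4K assumed to be 3840x2160):

    # Sketch: normalize COCO-style [x, y, width, height] bboxes to [0, 1]
    # by dividing by the image dimensions (4K assumed to be 3840x2160).
    IMG_W, IMG_H = 3840, 2160

    def normalize_bbox(bbox):
        x, y, w, h = bbox
        return [x / IMG_W, y / IMG_H, w / IMG_W, h / IMG_H]

    print(normalize_bbox([311.01, 239.01, 16.98, 240.98]))
    # -> [0.0810, 0.1107, 0.0044, 0.1116] (approximately)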
    
    opened by c3210927 0
Releases
v1.4.1rc9

Owner
Zumo Labs: "The world is a simulation."