Synthetic data for the people.

Overview

zpy: Synthetic data in Blender.


Synthetic raspberry pi

Abstract

Collecting, labeling, and cleaning data for computer vision is a pain. Jump into the future and create your own data instead! Synthetic data is faster to develop with, effectively infinite, and gives you full control to prevent bias and privacy issues from creeping in. We created zpy to make synthetic data easy, by simplifying the scene creation process and providing an easy way to generate synthetic data at scale.

Install

Install: Using Blender GUI

First, download the latest zip (you want the one called zpy_addon-v*.zip). Then open Blender and navigate to Edit -> Preferences -> Add-ons. You should be able to install and enable the add-on from there.

Enabling the addon

Install: Linux: Using Install Script

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/ZumoLabs/zpy/main/install.sh)"

Set these environment variables for specific versions:

export BLENDER_VERSION="2.91"
export BLENDER_VERSION_FULL="2.91.0"
export ZPY_VERSION="v1.0.0"

Documentation

More documentation can be found here.

Examples

Tutorials

Projects

CLI

We provide a simple CLI; you can find its documentation here.

Contributing

We welcome community contributions! Search through the current issues or open your own.

Licence

This release of zpy is under the GPLv3 license, the free copyleft license also used by Blender. TL;DR: It's free, use it!

BibTeX

If you use zpy in your research, we would appreciate a citation!

@article{zpy,
  title={zpy: Synthetic data for Blender.},
  author={Ponte, H. and Ponte, N. and Karatas, K.},
  journal={GitHub. Note: https://github.com/ZumoLabs/zpy},
  volume={1},
  year={2020}
}
Comments
  • Improving Rendering & Processing Speed with AWS

    Is your feature request related to a problem? Please describe. I'm frustrated by the time it takes to process images on my local device (the rendering isn't the worst part, but the annotations take a long time at the end). I would like to use AWS EC2 instances to reduce the time required to create images and annotations (especially the category segmentation, which seems to take a long time to encode into the annotation).

    Do you have any suggestions as to the kinds of instances or methods on AWS that could be used to reduce rendering and processing time? That would be immensely helpful. Thank you.
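Independent of instance choice, one lever on the annotation bottleneck is to encode each image's segmentation mask on a separate CPU core, since the masks are independent. A minimal plain-Python sketch (the run-length encoder here is a hypothetical stand-in for zpy's own encoding, not its API):

```python
from concurrent.futures import ProcessPoolExecutor

def rle_encode(mask):
    # Run-length encode a flat list of 0/1 pixels: counts of alternating
    # runs, starting with the count of leading zeros.
    runs, count, prev = [], 0, 0
    for px in mask:
        if px == prev:
            count += 1
        else:
            runs.append(count)
            count, prev = 1, px
    runs.append(count)
    return runs

def encode_all(masks, workers=4):
    # One mask per task; ProcessPoolExecutor fans them out across cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rle_encode, masks))
```

On a compute-optimized EC2 instance with many cores this part scales roughly linearly with core count.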

    question 
    opened by tgorm1965 14
  • Is it possible to segment parts of an object based on its material?

    Is your feature request related to a problem? Please describe. I would like to segment a complex mesh that is made up of different components, with a different material for every component. Is it possible to segment based on the material? Or do I need to separate the single mesh into multiple components?
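If you do split the mesh into per-material parts, each part needs its own segmentation color; a small plain-Python helper (hypothetical, not part of zpy) can keep that mapping stable across runs:

```python
import random

def colors_for_materials(material_names, seed=0):
    # One reproducible random RGB triple (floats in [0, 1]) per material
    # name, so every material-part keeps the same color between runs.
    rng = random.Random(seed)
    return {
        name: (rng.random(), rng.random(), rng.random())
        for name in material_names
    }
```

Each color could then be passed to zpy's segment call for the matching part.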

    question 
    opened by tgorm1965 13
  • Bounding Box generation Error

    I've created a Blender scene containing a dice, and I'm using zpy to generate a dataset of images obtained by rotating around the object and jittering both the dice position and the camera. Everything seems to be working properly, but the bounding boxes in the generated annotation file get progressively worse with each picture.

    For example, this is the first image's bounding box: [image]

    This one we get halfway through: [image]

    And this is one of the last ones: [image]

    This is my code (I've cut some stuff; I can't paste it all for some reason):

    
    import math
    import random
    from pathlib import Path

    import bpy
    import zpy


    def run(num_steps=20):

        # Random seed results in unique behavior
        zpy.blender.set_seed()

        # Create the saver object
        saver = zpy.saver_image.ImageSaver(description="Domain randomized dado")

        # Add the dado category
        dado_seg_color = zpy.color.random_color(output_style="frgb")
        saver.add_category(name="dado", color=dado_seg_color)

        # Segment the dado (make sure a material exists for the object!)
        zpy.objects.segment("dado", color=dado_seg_color)

        # Original dice pose
        zpy.objects.save_pose("dado", "dado_pose_og")

        # Original camera pose
        zpy.objects.save_pose("Camera", "Camera_pose_og")

        # Save the positions of objects so we can jitter them later
        zpy.objects.save_pose("Camera", "cam_pose")
        zpy.objects.save_pose("dado", "dado_pose")

        asset_dir = Path(bpy.data.filepath).parent
        texture_paths = [
            asset_dir / Path("textures/madera.png"),
            asset_dir / Path("textures/ladrillo.png"),
        ]

        # Run the sim.
        for step_idx in range(num_steps):
            # Example logging
            # stp = zpy.blender.step()
            # print("BLENDER STEPS: ", stp.num_steps)
            # log.debug("This is a debug log")

            # Return camera and dado to original positions
            zpy.objects.restore_pose("Camera", "cam_pose")
            zpy.objects.restore_pose("dado", "dado_pose")

            # Rotate camera (`rotate` is defined elsewhere in the
            # script; it was cut from this snippet)
            location = bpy.context.scene.objects["Camera"].location
            angle = step_idx * 360 / num_steps
            location = rotate(location, angle, axis=(0, 0, 1))
            bpy.data.objects["Camera"].location = location

            # Jitter dado pose
            zpy.objects.jitter(
                "dado",
                translate_range=((-300, 300), (-300, 300), (0, 0)),
                rotate_range=(
                    (0, 0),
                    (0, 0),
                    (-math.pi, math.pi),
                ),
            )

            # Jitter the camera pose
            zpy.objects.jitter(
                "Camera",
                translate_range=(
                    (-5, 5),
                    (-5, 5),
                    (-5, 5),
                ),
            )

            # Camera should be looking at dado
            zpy.camera.look_at("Camera", bpy.data.objects["dado"].location)

            texture_path = random.choice(texture_paths)

            # HDRIs are like a pre-made background with lighting
            # zpy.hdris.random_hdri()

            # Pick a random texture from the 'textures' folder (relative to blendfile)
            # Textures are images that we will map onto a material
            new_mat = zpy.material.make_mat_from_texture(texture_path)
            # zpy.material.set_mat("dado", new_mat)

            # Have to segment the new material
            zpy.objects.segment("dado", color=dado_seg_color)

            # Jitter the dado material
            # zpy.material.jitter(bpy.data.objects["dado"].active_material)

            # Jitter the HSV for empty and full images
            # hsv = (
            #     random.uniform(0.49, 0.51),  # (hue)
            #     random.uniform(0.95, 1.1),  # (saturation)
            #     random.uniform(0.75, 1.2),  # (value)
            # )

            # Name for each of the output images
            rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
            iseg_image_name = zpy.files.make_iseg_image_name(step_idx)
            depth_image_name = zpy.files.make_depth_image_name(step_idx)

            # Render image
            zpy.render.render(
                rgb_path=saver.output_dir / rgb_image_name,
                iseg_path=saver.output_dir / iseg_image_name,
                depth_path=saver.output_dir / depth_image_name,
                # hsv=hsv,
            )

            # Add images to saver
            saver.add_image(
                name=rgb_image_name,
                style="default",
                output_path=saver.output_dir / rgb_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=iseg_image_name,
                style="segmentation",
                output_path=saver.output_dir / iseg_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=depth_image_name,
                style="depth",
                output_path=saver.output_dir / depth_image_name,
                frame=step_idx,
            )

            # Add annotation to segmentation image
            saver.add_annotation(
                image=rgb_image_name,
                seg_image=iseg_image_name,
                seg_color=dado_seg_color,
                category="dado",
            )

        # Write out annotations
        saver.output_annotated_images()
        saver.output_meta_analysis()

        # ZUMO Annotations
        zpy.output_zumo.OutputZUMO(saver).output_annotations()

        # COCO Annotations
        zpy.output_coco.OutputCOCO(saver).output_annotations()

        # Return to the initial state
        zpy.objects.restore_pose("dado", "dado_pose_og")
        zpy.objects.restore_pose("Camera", "Camera_pose_og")
    

    Is this my fault or an actual bug?
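The `rotate` call in the snippet refers to a helper the author cut. For reference, a degree-based z-axis rotation consistent with how it is called (an assumption about the omitted helper, not the author's actual code) could look like:

```python
import math

def rotate(location, angle, axis=(0, 0, 1)):
    # Rotate a 3D point by `angle` degrees around the z-axis.
    # This sketch only handles axis=(0, 0, 1), the only axis used above.
    if tuple(axis) != (0, 0, 1):
        raise NotImplementedError("only z-axis rotation sketched here")
    x, y, z = location
    theta = math.radians(angle)
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)
```

If the real helper expects radians rather than degrees, the angle computed as step_idx * 360 / num_steps would itself produce progressively worse camera placement, so the units are worth checking.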

    • OS: Ubuntu 20.04
    • Python 3.9
    • Blender 2.93
    • zpy: latest
    bug 
    opened by franferraz98 7
  • Better Run button

    To properly run the "run" script, a zpy user has to click the "Run" button under the ZPY panel. If they click the "Run" button in the Blender scripting panel instead, the scene will not be saved and reset, resulting in simulation drift. We should provide a warning, or simply make the "Run" button in the Blender scripting panel a valid option as well.

    bug enhancement 
    opened by hu-po 7
  • Question: Is there a method to obtain the height and width of every pixel of a depth map?

    Hi, I would like to obtain this information to accompany the depth maps produced by ZPY. Is this possible using ZPY's implementation? How would I go about this? Thank you!
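zpy may not expose this directly, but under a pinhole camera model the metric footprint of a pixel follows from its depth value and the focal length in pixels. A sketch (the camera numbers below are assumptions for illustration, not zpy defaults):

```python
def pixel_footprint(depth_m, focal_px):
    # Approximate metric width/height covered by one pixel observed at a
    # given depth: size ~= depth / focal_length_in_pixels (pinhole model).
    # focal_px = focal_length_mm / sensor_width_mm * image_width_px
    return depth_m / focal_px

# Assumed camera: 50 mm lens, 36 mm sensor width, 640 px wide image
focal_px = 50 / 36 * 640
size_m = pixel_footprint(2.0, focal_px)  # footprint of a pixel seen at 2 m
```

Applying this per pixel of the depth map yields a per-pixel size map; for square pixels the width and height are equal.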

    question 
    opened by tgorm1965 6
  • Code Formatting

    Ran automatic code formatting with the following git pre-commit hooks:

    • black
    • trailing-whitespace-fixer
    • end-of-file-fixer
    • add-trailing-comma

    Added a pre-commit config file in case any contributors would like to use these pre-commit git hooks going forward.

    opened by zkneupper 6
  • Cannot create COCO annotations when RGB and Segmentation images are in a separate folder

    Hi, I am trying to generate data with annotations, where my output_dir is the parent folder and I save my RGB and ISEG images in child folders, for example data/rgb_images and data/iseg_images. I also have another folder for annotations, data/coco_annotations. I can render images successfully; however, I cannot generate COCO annotations. As far as I understand from the source code, it parses the image path as saver.output_dir + filename instead of the directories where the images are actually stored.

    Thanks!!!
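Until the saver supports sub-folders natively, one workaround is to rewrite the paths in the annotation file after generation; the pathlib computation is the key part (the helper name and layout below are hypothetical, not zpy API):

```python
from pathlib import Path

def coco_file_name(image_path, output_dir):
    # COCO "file_name" entries should be relative to the directory the
    # annotation JSON describes, so images in nested folders resolve.
    return Path(image_path).relative_to(output_dir).as_posix()

coco_file_name("data/rgb_images/img_0001.png", "data")
# → "rgb_images/img_0001.png"
```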

    opened by aadi-mishra 4
  • Allow Custom scaling for HDRI Background

    Hi, this is a small feature I'd like added to the zpy.hdris.random_hdri() function. Currently, the function doesn't accept a custom scale from the user and renders the HDRI background at the default scale relative to the 3D object.

    It would be helpful if a parameter could be added to random_hdri() so that it can be passed through to load_hdri(). This would allow us to play with the HDRI background scaling.

    hdri enhancement 
    opened by Resh-97 4
  • Give multiple objects the same tag

    Describe the solution you'd like Not sure if this already exists, but I'd like to be able to give numerous object meshes one label. If it does, please let me know!

    question 
    opened by yashdutt20 4
  • AOV Pass Error

    Describe the bug A KeyError is occurring in zpy. Can you help me figure out how to fix this?

    To Reproduce Steps to reproduce the behavior: I have followed the Suzanne script from the first video of the YouTube channel.

    Expected behavior To save and render the images.

    Screenshots

      File "/Applications/Blender.app/Contents/Resources/2.91/python/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
      KeyError: 'bpy_struct[key]: key "aovs" not found'
        In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x1576194d0>)
        In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x15761db00>)
        In call to configurable 'render_aov' (<function render_aov at 0x157623560>)
      Error: Python script failed, check the message in the system console
      Executing script failed with exception
      Error: Python script failed, check the message in the system console

    Desktop (please complete the following information):

    • OS: Mac OS Catalina 10.15.5
    • latest zpy
    opened by yashdutt20 3
  • KeyError: 'bpy_struct[key]: key "aovs" not found'

    Describe the bug I downloaded the Suzanne 1 example from your repository and ran it using Blender 2.91 and your library.

    I got following error:

    Traceback (most recent call last):
      File "/run.py", line 78, in <module>
      File "/run.py", line 35, in run
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 204, in render_aov
        output_node = make_aov_file_output_node(style=style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 82, in make_aov_file_output_node
        zpy.render.make_aov_pass(style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x7f73bfd25f80>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x7f73bfd25ef0>)
      In call to configurable 'render_aov' (<function render_aov at 0x7f73bfd28f80>)
    Error: Python script failed, check the message in the system console
    

    To Reproduce Steps to reproduce the behavior:

    1. Install Blender 2.91.
    2. Install zpy addon and zpy-zumo Python library.
    3. Download Suzanne 1 run.py script.
    4. Run Suzanne 1 run.py script.

    Expected behavior Suzanne 1 run.py script run without errors.

    bug 
    opened by VictorAtPL 3
  • Material jitter

    Hi

    How would I go about jittering just one object's material, while leaving the others constant?

    So for example, say I've got a car object. Car tyres are always matt black and the taillights are always red, but the paintwork varies: it might be gloss red or matt blue.

    Is this to do with the way materials are assigned in Blender? i.e. the tyres would be fixed as black and the paintwork would be inherited or determined by zpy.

    thank you

    Andrew

    opened by barney2074 0
  • HDRI & texture randomisation

    Hi,

    thank you for ZPY- loving it & just what I need- thank you..!

    I've worked through the tutorials. The Suzanne 3 YouTube content seems to differ from the latest Suzanne 3 run.py on GitHub: YouTube shows a Python list of explicit texture and HDRI paths, whereas GitHub seems to have moved that into a function?

    To get it to work, from the documentation I thought I just needed to create a textures directory and an hdris/1k directory in the same folder as the .blend file, so:

      path/foo.blend
      path/textures/1.jpg
      path/textures/2.jpg
      path/hdris/1k/a.exr
      path/hdris/1k/b.exr

    However, I get a bunch of errors, so it looks like this is not correct. I would be very grateful if you could point me in the right direction. Thanks again

    Andrew

    opened by barney2074 1
  • How to get the Normalised segmentation?

    I want to generate segmentation_float with zpy.output_coco.OutputCOCO(saver).output_annotations().

    When I changed the code, it didn't return any segmentation, just the bbox.

    The reason I want to normalize these is that I am rendering at 4K image size and getting large annotation files. For example, for 5 images the annotation file was 17 MB.

            {
                "category_id": 0,
                "image_id": 0,
                "id": 0,
                "iscrowd": false,
                "bbox": [
                    311.01,
                    239.01,
                    16.980000000000018,
                    240.98000000000002
                ],
                "area": 4091.8404000000046
            },
            {
                "category_id": 1,
                "image_id": 0,
                "id": 1,
                "iscrowd": false,
                "bbox": [
                    279.01,
                    223.01,
                    79.98000000000002,
                    120.98000000000002
                ],
                "area": 9675.980400000004
            },
    
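One way to shrink the files, sketched in plain Python below, is to normalize every coordinate by the image dimensions and round it; the polygon and the 4K size here are hypothetical, and note that standard COCO tooling expects pixel coordinates, so normalized values must be converted back before use:

```python
def normalize_polygon(polygon, width, height):
    # COCO polygons are flat [x0, y0, x1, y1, ...] pixel lists; dividing
    # x by width and y by height gives short 0-1 values for the JSON.
    return [
        round(v / (width if i % 2 == 0 else height), 4)
        for i, v in enumerate(polygon)
    ]

normalize_polygon([384.0, 216.0, 1920.0, 1080.0, 0.0, 0.0], 3840, 2160)
# → [0.1, 0.1, 0.5, 0.5, 0.0, 0.0]
```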
    opened by c3210927 0
Releases: v1.4.1rc9

Owner: Zumo Labs ("The world is a simulation.")