Notebook and code to synthesize complex, high-dimensional datasets using Gretel APIs.

Related tags

Deep Learning, trainer
Overview

Gretel Trainer

This code is designed to help users successfully train synthetic models on complex datasets with high row and column counts. It works by intelligently dividing a dataset into a set of smaller datasets of correlated columns that can be trained in parallel and then joined back together.

Get Started

Running the notebook

  1. Launch the Notebook in Google Colab or your preferred environment.
  2. Add your dataset and Gretel API key to the notebook.
  3. Generate synthetic data!

NOTE: If you are starting a dataset run from scratch, either delete the existing cache file or choose a new cache file name.
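
Outside the notebook, the equivalent Python workflow is roughly the following sketch (assuming the gretel-trainer package is installed and your Gretel API key is configured; the dataset path is a placeholder):

    from gretel_trainer import trainer

    # Train a synthetic model on a local CSV (placeholder path)
    model = trainer.Trainer()
    model.train("my_dataset.csv")

    # Sample synthetic records from the trained model
    synthetic_df = model.generate()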

TODOs / Roadmap

  • Enable additional sampling from trained models.
  • Detect and label encode random UIDs (preprocessing).
Comments
  • Benchmark route Amplify models through Trainer

    Top level change

    Now that Trainer has a GretelAmplify model, Benchmark uses Trainer for Amplify runs instead of the SDK.

    Refactor

    I refactored Benchmark's Gretel models and executors to centralize, and thus make it simpler to understand:

    • which model types use Trainer (opt-in) vs. use the SDK
    • the "compatibility requirements" for different models (currently: LSTM <= 150 columns, GPTX == 1 column)

    These had been spread across a few different places (compare.py determined Trainer/SDK, gretel/sdk.py had GPTX compatibility, gretel/trainer.py had LSTM compatibility), but now it can all be found in gretel/models.py.
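
    As an illustration only (the function below is hypothetical and not the actual contents of gretel/models.py), the centralized check amounts to something like:

    def meets_compatibility_requirements(model_name: str, column_count: int) -> bool:
        # Hypothetical sketch of the rules listed above, kept in one place.
        if model_name == "GretelLSTM":
            return column_count <= 150  # LSTM: at most 150 columns
        if model_name == "GretelGPTX":
            return column_count == 1    # GPT-X: exactly one (text) column
        return True  # other model types have no column-count restriction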

    At first glance it would seem compatibility requirements could be defined on specific model subclasses to make things more polymorphic. However, Benchmark's Gretel model classes are really just friendly wrappers around specific model configurations (from the blueprints repo) and do not represent all possible instances of that model type running through Benchmark. Instead, we instruct users to subclass the generic GretelModel base class when they want to provide their own specific Gretel configuration. There are two reasons for this:

    1. It's a simpler instruction (always subclass this one thing)
    2. It enables us to include model types that are not yet "first class supported," such as DGAN (which we can't support in the same way we do models like Amplify/LSTM/etc. because DGAN's config includes required fields that are specifically coupled to the data source—there is no "one size fits all" blueprint).
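
    For example, a user-supplied DGAN configuration (hypothetical path) is wrapped the same way as any other custom config:

    class MyDgan(GretelModel):
        config = "/path/to/my_dgan.yml"  # DGAN config coupled to a specific data source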

    Small fixes

    • fix the model_slug value for Trainer's GretelACTGAN model
      • :warning: should this be changed to a list ["actgan", "ctgan"] for a little while for a smoother transition/deprecation experience??
    • zero-index custom model runs' run-identifier to match gretel model runs (which were themselves fixed to match project names here)
    opened by mikeknep 2
  • Lift gretel model compatibility to separate module

    What's here

    Make it easier to find the "compatibility rules" for models by lifting the logic to its own module.

    Why not add this logic to the specific model classes? Wouldn't that be more polymorphic?

    The model classes (GretelLSTM, GretelCTGAN, etc.) are wrappers around specific configurations from the blueprints repo. They do not represent every possible configuration of that model type. If a user wants to run a customized LSTM config, for example, they subclass GretelModel, not GretelLSTM:

    class MyLstm(GretelModel):
        config = "/path/to/my_lstm.yml"
    

    Note: they could subclass GretelLSTM, but 1) it's easier to tell people to just subclass GretelModel regardless of model type, and/because 2) this ultimately treats the model configuration as the source of truth.

    If someone mistakenly created a custom Gretel model like this...

    class MyGptX(GretelGPTX):
        config = "/path/to/my_amplify.yml"
    

    ...Benchmark will treat this as an Amplify model, because basically all it does with the class instance is grab the config attribute (and the name; the results output will show the name as MyGptX).

    opened by mikeknep 1
  • Lr/artifact manifest

    Added logic for config selection and updated dictionary key to access manifest per latest internal changes.

    Note that high-dimensionality-high-record is non-existent at the moment, as is the manifest endpoint :)

    Items yet to be addressed:

    • turn off partitions for non-LSTM models
    opened by lipikaramaswamy 1
  • Add param to pass custom base configuration

    • Prefer config if present, otherwise use the model_type's default config.
    • This does open the door a little wider to setting an invalid config that won't be known to be bad until attempting to train. That door was already slightly ajar in that one could use model_params to set keys to invalid values.
    • Not included here, but a thought: we could validate model_type earlier (even as the very first step of __init__) to fail fast, specifically before even creating a project.
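
    A rough usage sketch of the intent (assuming the custom config is supplied through the model wrapper; the path is a placeholder):

    from gretel_trainer import trainer
    from gretel_trainer.models import GretelLSTM

    # Prefer the supplied config; otherwise the model type's default config is used.
    model = trainer.Trainer(model_type=GretelLSTM(config="/path/to/custom_lstm.yml"))
    model.train("my_dataset.csv")
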
    opened by mikeknep 1
  • Remove no-op elif case from runner

    Particularly given that we now have a third model (Amplify) supported in Trainer, we can remove this no-op elif clause so that the runner only has special logic for / awareness of LSTM (expand up in the diff for context).

    opened by mikeknep 0
  • Switch CTGAN usages to ACTGAN.

    ACTGAN is the successor of CTGAN.

    Note (1): this change is backward compatible, as all of the parameters that CTGAN supported are supported by ACTGAN as well.

    Note (2): any previously trained CTGAN models will still be usable, i.e. it will still be possible to generate new records using old CTGAN models.

    opened by pimlock 0
  • Fix off-by-one difference between project name and run ID

    Quick fix so that benchmark's internal run identifier lines up with the project name in Gretel Cloud. We'll eventually have a more user-friendly and stable interface to access detailed run information, but until we figure out how exactly we want that to look and do it, this should make things a little more friendly for those willing to dive into the internals: the models from project benchmark-{timestamp}-3 will correspond to comparison.results_dict["gretel-3"] (instead of "gretel-4")

    Note: I considered just using the full project name as the identifier instead of gretel-{index}, but we don't have an equivalent to project names for user custom model runs, so I figure the current [gretel|custom]-{index} approach is still best for now.

    opened by mikeknep 0
  • Configure session before starting Benchmark comparison

    Current behavior

    When running in an environment where no Gretel credentials can be found (e.g. Colab), when Benchmark kicks off a comparison the background threads instantiating Trainer instances will prompt for an API key. This is problematic for multiple reasons, all (I believe) due to it running in multiple background threads: it prompts multiple times, doesn't accept input and/or cache properly, and ultimately crashes.

    This fix

    Benchmark itself now checks for a configured session before kicking off any real work. It prompts (api_key="prompt") if no credentials are found, validates (validate=True) the supplied API key, and caches (cache="yes") it for all the runs it manages. The configure_session calls that happen when instantiating Trainer effectively "pass through." I've tested this by installing trainer from this branch in Colab and it is now working as expected.
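
    For reference, the session setup described above corresponds to a gretel_client call along these lines (a sketch; the actual call lives inside Benchmark):

    from gretel_client import configure_session

    # Prompt for an API key only if no credentials are found, validate it,
    # and cache it so the Trainer instances in background threads reuse it.
    configure_session(api_key="prompt", validate=True, cache="yes")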

    opened by mikeknep 0
  • Include dataset name in trainer uploads.

    Add original file name to data sources uploaded as part of trainer projects. This helps disambiguate the data sources from multiple trainer runs where previously they were always named trainer_0.csv, trainer_1.csv, etc.

    Also fixes StrategyRunner to not silently swallow all ApiExceptions when submitting a job, so errors not associated with max job limit are still thrown and surfaced to the user.

    opened by kboyd 0
  • Auto-determine best model from training data

    Rather than create a GretelAuto model class that would need to override or work around several _BaseConfig details (validation, max/limit values, etc.), my goal here is to establish the convention that model type is optional and if you don't specify one when instantiating the Trainer, you're OK with us choosing for you. This is a change from the current behavior (optional but default to LSTM). In this case, we defer setting the trainer instance's self.model_type until such time as we can determine the best model to use: namely, at train time when a dataset has been provided.

    I'm a little unclear on the load (from cache) workflow, which in this branch's implementation would set the StrategyRunner's model_config to None. I think this is OK because the only methods referencing that value are part of training (train_all_partitions => train_next_partition => train_partition), and that workflow is only kicked off by the Trainer's train method, which will load in data and use it to determine and set a concrete model.

    I've also added an optional delimiter parameter to train to help support files with non-comma delimiters.
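
    Put together, a minimal usage sketch of the new behavior (file name and delimiter are placeholders):

    from gretel_trainer import trainer

    # No model_type given: the Trainer defers the choice and picks the best
    # model at train time, once it has seen the dataset.
    model = trainer.Trainer()
    model.train("my_dataset.psv", delimiter="|")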

    opened by mikeknep 0
  • Get average sqs score from across partitions

    There are a few ways we could slice and dice this; since there may be additional SQS info we want from the run in the future, I decided to expose the entire List[dict] from the runner and let the trainer pluck out and calculate this first user-friendly aggregate. I'm open to pushing more of this down to the runner and/or transforming the SQS dictionaries into first-class types (likely dataclasses) if anyone has a strong opinion or thinks it'd be useful.
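
    Conceptually, the aggregation is just an average over the per-partition scores; a minimal sketch (the dict shape here is illustrative, not the runner's exact output):

    def average_sqs(partition_sqs: list[dict]) -> int:
        # Illustrative only: pull each partition's quality score and average them.
        scores = [report["synthetic_data_quality_score"] for report in partition_sqs]
        return round(sum(scores) / len(scores))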

    opened by mikeknep 0
  • Use artifact manifest for determine_best_model.

    Not fully tested. Waiting for new backend API to be available.

    Should revisit retry logic if we can reliably distinguish between a pending manifest (still being generated) and some other error. Or if retrying is included in the gretel_client interface.

    opened by kboyd 1
Releases(v0.5.0)
  • v0.5.0(Nov 18, 2022)

    What's Changed

    • GretelCTGAN has been completely removed, fully replaced by its successor, GretelACTGAN
    • GretelACTGAN uses the new tabular-actgan config by default
    • Benchmark now routes Amplify models through Trainer rather than the SDK
    • Bug fix: helper to properly configure Gretel session before starting Benchmark comparison when unset
    • Bug fix: zero-index Benchmark run ID (internal) to fix off-by-one difference with project name

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.1...v0.5.0

  • v0.4.1(Nov 2, 2022)

    What's Changed

    • Add pip install command and Colab disclaimer to Benchmark notebook by @mikeknep in https://github.com/gretelai/trainer/pull/22
    • Include dataset name in trainer uploads. by @kboyd in https://github.com/gretelai/trainer/pull/21
    • Docs improvements by @MasonEgger (https://github.com/gretelai/trainer/pull/23 https://github.com/gretelai/trainer/pull/24 https://github.com/gretelai/trainer/pull/28 https://github.com/gretelai/trainer/pull/26)
    • Add support for Gretel Amplify by @pimlock in https://github.com/gretelai/trainer/pull/29

    New Contributors

    • @kboyd made their first contribution in https://github.com/gretelai/trainer/pull/21
    • @MasonEgger made their first contribution in https://github.com/gretelai/trainer/pull/23
    • @pimlock made their first contribution in https://github.com/gretelai/trainer/pull/29

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.0...v0.4.1

  • v0.4.0(Oct 6, 2022)

    What's Changed

    • Initial release of new Benchmark module :rocket: by @mikeknep in https://github.com/gretelai/trainer/pull/19
    • Create simple-conditional-generation.ipynb :notebook: by @zredlined in https://github.com/gretelai/trainer/pull/18

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.3.0...v0.4.0

  • v0.3.0(Aug 30, 2022)

  • v0.2.3(Aug 24, 2022)

    What's Changed

    • The trainer now chooses the best model configuration based on input training data when model_type is not specified in advance at Trainer instantiation (previously defaulted to GretelLSTM)
    • train accepts an optional delimiter argument (defaults to comma when unspecified)
    • Input training data is divided more equally across row partitions
    • LSTM models generate a consistent number of records (5000) during data training (previously matched size of input training data)
    • Fixed trainer generate to synthesize the correct number of records when multiple row partitions are used
    • Fixed trainer get_sqs_score method

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.2...v0.2.3

  • v0.2.2(Aug 11, 2022)

    What's Changed

    • Update default model config by @zredlined in https://github.com/gretelai/trainer/pull/10
    • Remove project delete instruction by @drew in https://github.com/gretelai/trainer/pull/11
    • CTGAN and conditional data generation by @zredlined in https://github.com/gretelai/trainer/pull/12
    • Get average sqs score from across partitions by @mikeknep in https://github.com/gretelai/trainer/pull/14

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.1...v0.2.2

  • v0.2.1(Jun 16, 2022)

  • v0.2.0(Jun 10, 2022)

  • v0.1.0(Jun 10, 2022)

Owner
Gretel.ai
Gretel.ai Open Source Projects and Tools