Generate LookML views from dbt models

Overview

dbt2looker

Use dbt2looker to generate Looker view files automatically from dbt models.

Features

  • Column descriptions synced to Looker
  • A dimension for each column in the dbt model
  • Dimension groups for datetime/timestamp/date columns (see the sketch below)
  • Measures defined through dbt column metadata (see Defining measures below)
  • Warehouse column types mapped to Looker types
  • Warehouses: BigQuery, Snowflake, Redshift (Postgres to come)
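
For example, a timestamp column becomes a Looker dimension group rather than a plain dimension. A minimal sketch of the kind of LookML to expect (the column name created_at and the exact set of timeframes are illustrative, not dbt2looker's guaranteed output):

dimension_group: created {
  type: time
  timeframes: [raw, time, date, week, month, quarter, year]
  sql: ${TABLE}.created_at ;;
}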


Quickstart

Run dbt2looker in the root of your dbt project after generating dbt docs.

Generate Looker view files for all models:

dbt docs generate
dbt2looker

Generate Looker view files for all models tagged prod:

dbt2looker --tag prod

Install

Install from PyPI

Install from PyPI into a fresh virtual environment.

# Create virtual env
python3.7 -m venv dbt2looker-venv
source dbt2looker-venv/bin/activate

# Install
pip install dbt2looker

# Run
dbt2looker

Build from source

Requires Poetry and Python >= 3.7.

# Install
poetry install

# Run
poetry run dbt2looker

Defining measures

You can define Looker measures in your dbt schema.yml files. For example:

models:
  - name: pages
    columns:
      - name: url
        description: "Page url"
      - name: event_id
        description: "unique event id for page view"
        meta:
          measures:
            page_views:
              type: count
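
Running dbt2looker on this example produces a view with a dimension per column plus the page_views measure. A rough sketch of the generated view file (the sql_table_name and dimension types below are illustrative; the real values come from your warehouse catalog, and per the changelog the measure description falls back to the column description):

view: pages {
  sql_table_name: analytics.pages ;;

  dimension: url {
    type: string
    sql: ${TABLE}.url ;;
    description: "Page url"
  }

  dimension: event_id {
    type: string
    sql: ${TABLE}.event_id ;;
    description: "unique event id for page view"
  }

  measure: page_views {
    type: count
    sql: ${TABLE}.event_id ;;
    description: "unique event id for page view"
  }
}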
Comments
  • Column Type None Error - Fields Not Converting To Dimensions

    Column Type None Error - Fields Not Converting To Dimensions

    When running dbt2looker --tag marts on my mart models, I receive dozens of errors around none type conversions.

    20:54:28 WARNING Column type None not supported for conversion from snowflake to looker. No dimension will be created.

    Here is the example of the schema.yml file.

    (screenshot: schema.yml)

    The interesting thing is that it correctly recognizes the doc that corresponds to the model. The explore within the model file is correct and has the correct documentation.

    Not sure if I can be of any more help but let me know if there is anything!

    bug 
    opened by sisu-callum 19
  • ValueError: Failed to parse dbt manifest.json

    ValueError: Failed to parse dbt manifest.json

    Hey! I'm trying to run this package and hitting errors right after installation. I pip-installed dbt2looker and ran the following in the root of my dbt project:

    dbt docs generate
    dbt2looker
    

    This gives me the following error:

    Traceback (most recent call last):
      File "/Users/josh/.pyenv/versions/3.10.0/bin/dbt2looker", line 8, in <module>
        sys.exit(run())
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 108, in run
        raw_manifest = get_manifest(prefix=args.target_dir)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 33, in get_manifest
        parser.validate_manifest(raw_manifest)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/parser.py", line 20, in validate_manifest
        raise ValueError("Failed to parse dbt manifest.json")
    ValueError: Failed to parse dbt manifest.json

    This is preceded by a whole mess of error messages like these:

    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['analysis']
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['test']

    Any idea what might be going wrong here? Happy to provide more detail. Thank you!

    opened by jdavid459 6
  • DBT version 1.0

    DBT version 1.0

    Hi,

    Does this library support dbt version 1.0 and later? I can't get it to run at all. There are a lot of errors when checking the schema of the manifest.json file.

    / Andrea

    opened by AndreasTA-AW 3
  • Multiple manifest.json/catalog.json/dbt_project.yml files found in path ./

    Multiple manifest.json/catalog.json/dbt_project.yml files found in path ./

    When running

    dbt2looker --tag test
    

    I get

    $ dbt2looker --tag test
    19:31:20 WARNING Multiple manifest.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple catalog.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple dbt_project.yml files found in path ./ this can lead to unexpected behaviour
    19:31:20 INFO   Generated 0 lookml views in ./lookml/views
    19:31:20 INFO   Generated 1 lookml model in ./lookml
    19:31:20 INFO   Success
    

    and no lookml files are generated.

    I assume this is because I have multiple dbt packages installed? Is there a way to get around this? Otherwise, a feature request would be the ability to specify which files should be used - perhaps in a separate dbt2looker.yml settings file.

    enhancement 
    opened by arniwesth 3
  • Support Bigquery BIGNUMERIC datatype

    Support Bigquery BIGNUMERIC datatype

    Previously, dbt2looker would not create a dimension for fields with the BIGNUMERIC data type, since Looker didn't support converting BIGNUMERIC. Running dbt2looker in the CLI produced the warning WARNING Column type BIGNUMERIC not supported for conversion from bigquery to looker. No dimension will be created. However, as of November 2021, Looker officially supports BigQuery BIGNUMERIC (link). Please add support for this. Thank you!

    opened by IL-Jerry 2
  • Adding Filters to Meta Looker Config in schema.yml

    Adding Filters to Meta Looker Config in schema.yml

    Use Case: Given that programmatic creation of all LookML files is the goal, there are a couple of features that could be added to give people more flexibility in measure creation. The first one I could think of was filters. Individuals would use filters to calculate measures like Active Users (e.g. count_distinct of user ids where some sort of flag is true).

    The following code is my admitted techno-babble, as I don't fully understand pydantic and my Python is almost exclusively pandas-based.

    def lookml_dimensions_from_model(model: models.DbtModel, adapter_type: models.SupportedDbtAdapters):
        return [
            {
                'name': column.name,
                'type': map_adapter_type_to_looker(adapter_type, column.data_type),
                'sql': f'${{TABLE}}.{column.name}',
                'description': column.description,
                # proposed addition: pass through any filters defined in the column's meta
                'filters': [{f.name: f.value} for f in column.meta.looker.filters],
            }
            for column in model.columns.values()
            if map_adapter_type_to_looker(adapter_type, column.data_type) in looker_scalar_types
        ]


    def lookml_measures_from_model(model: models.DbtModel):
        return [
            {
                'name': measure.name,
                'type': measure.type.value,
                'sql': f'${{TABLE}}.{column.name}',
                'description': f'{measure.type.value.capitalize()} of {column.description}',
                # proposed addition: attach the same column-level filters to the measure
                'filters': [{f.name: f.value} for f in column.meta.looker.filters],
            }
            for column in model.columns.values()
            for measure in column.meta.looker.measures
        ]
    

    It's pretty obvious that my Python skills are lacking (and I have no idea if this would actually work), but this idea would add more functionality for those who want to create more dynamic measures. Here is a bare-bones idea of how it could be configured in dbt:

    (screenshot: proposed schema.yml configuration)

    Then the output would look something like:

      measure: page_views {
        type: count
        sql: ${TABLE}.relevant_field ;;
        description: "Count of something."
        filters: [the_name_of_defined_column: "value_of_defined_column"]
      }
    
    enhancement 
    opened by sisu-callum 2
  • Incompatible packages when using snowflake

    Incompatible packages when using snowflake

    This error comes up when using with snowflake: https://github.com/snowflakedb/snowflake-connector-python/issues/1206

    It is remedied by the simple line pip install 'typing-extensions>=4.3.0', but dbt2looker depends on <4.0.0.

    dbt2looker 0.9.2 requires typing-extensions<4.0.0,>=3.10.0, but you have typing-extensions 4.3.0 which is incompatible.
    
    opened by owlas 1
  • Allow skipping dbt manifest validation

    Allow skipping dbt manifest validation

    Some users rely heavily on the manifest to enhance their work with dbt. IMHO, in such cases, dbt2looker should not enforce any schema validation; it is the users' responsibility to keep the Looker generation from breaking.

    opened by cgrosman 1
  • Redshift type conversions missing

    Redshift type conversions missing

    Redshift has missing type conversions:

    10:07:17 WARNING Column type timestamp without time zone not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type boolean not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type double precision not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type character varying(108) not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 DEBUG  Created view from model dim_appointment with 0 measures, 0 dimensions
    
    bug 
    opened by owlas 1
  • Join models in explores

    Join models in explores

    Expose config for defining explores with joined models.

    Ideally this would live in a dbt exposure but it's currently missing meta information.

    Add to models for now?

    enhancement 
    opened by owlas 1
  • feat: remove strict manifest validation

    feat: remove strict manifest validation

    Closes #72 Closes #37

    We have some validation already with typing, and the dbt manifest keeps changing. I think json-schema is causing more problems than it is solving. If we get weird errors, we can introduce some more relaxed validation.

    opened by owlas 0
  • Support group_labels in yml for dimensions

    Support group_labels in yml for dimensions

    https://github.com/lightdash/dbt2looker/blob/bb8f5b485ec541e2b1be15363ac3c7f8f19d030d/dbt2looker/models.py#L99

    measures seem to have this but not dimensions. Probably all/most properties available in https://docs.lightdash.com/references/dimensions/ should be represented here -- is this something lightdash is willing to maintain, or would you want a contribution? @TuringLovesDeathMetal / @owlas - I figure there should be full support for lightdash properties that map to Looker, to maximize the value of this utility for enabling Looker customers to uncouple themselves from Looker.

    opened by mike-weinberg 1
  • Issue when parsing dbt models

    Issue when parsing dbt models

    Hey folks!

    I've just run 'dbt2looker' in my local dbt repo folder, and I receive the following error:

    ❯ dbt2looker
    12:11:54 ERROR  Cannot parse model with id: "model.smallpdf.brz_exchange_rates" - is the model file empty?
    Failed
    

    The model file itself (pictured below) is not empty, so I am not sure what issue dbt2looker has parsing this model. It is not materialised as a table or view; dbt treats it as ephemeral - is that of importance when parsing files in the project? I've also tried running dbt2looker on a limited subset of dbt models via a tag; the same error appears. Any help is greatly appreciated!

    (screenshot: the model file)

    Other details:

    • dbt version 1.0.0
    • using dbt-redshift adapter [email protected]
    • let me know if anything else is of importance!
    opened by lewisosborne 8
  • Support model level measures

    Support model level measures

    Motivation

    We can technically implement a measure that uses multiple columns by putting it under one column's meta. But it would be more natural to define such measures at the model level.

    models:
      - name: ubie_jp_lake__dm_medico__hourly_score_for_nps
        description: |
          {{ doc("ubie_jp_lake__dm_medico__hourly_score_for_nps") }}
        meta:
          measures:
            total_x_y_z:
              type: number
              description: 'Summation of total x, total y and total z'
              sql: '${total_x} + ${total_y} + ${total_z}'
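
    For reference, the proposed measure would presumably render to LookML along these lines (a hypothetical sketch of the requested behaviour, built from the type, sql, and description given above - not something dbt2looker emits today):

    measure: total_x_y_z {
      type: number
      sql: ${total_x} + ${total_y} + ${total_z} ;;
      description: "Summation of total x, total y and total z"
    }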
    
    
    opened by yu-iskw 0
  • Lookml files should merge with existing views

    Lookml files should merge with existing views

    If I already have a view file, I'd like to merge in any new columns I've added in dbt.

    For example, if I have a description in dbt but not in Looker, I'd like to add it.

    If Looker already has a description, it should be left alone.

    Thread in dbt slack: https://getdbt.slack.com/archives/C01DPMVM2LU/p1650353949839609?thread_ts=1649968691.671229&cid=C01DPMVM2LU

    opened by owlas 0
  • Non-empty models cannot be parsed and are reported as empty

    Non-empty models cannot be parsed and are reported as empty

    As of version 0.9.2, dbt2looker no longer runs for us; v0.7.0 does run successfully. The error returned by 0.9.2 is 'Cannot parse model with id: "%s" - is the model file empty?'. However, the model this is returned for is not empty. Based on the code, it seems the attribute 'name' is missing, but inspecting the manifest.json file shows that there actually is a name for this model. I have no idea why the system reports these models as empty. The manifest.json object for one of the offending models is pasted below.

    Reverting to v0.9.0 (which does not yet have this error message) just leads to dbt2looker crashing without any information. Reverting to v0.7.0 fixes the problem. This issue effectively locks us (and likely others) into using an old version of dbt2looker.

    "model.zivver_dwh.crm_account_became_customer_dates":
            {
                "raw_sql": "WITH sfdc_accounts AS (\r\n\r\n    SELECT * FROM {{ ref('stg_sfdc_accounts') }}\r\n\r\n), crm_opportunities AS (\r\n\r\n    SELECT * FROM {{ ref('crm_opportunities') }}\r\n\r\n), crm_account_lifecycle_stage_changes_into_customer_observed AS (\r\n\r\n    SELECT\r\n        *\r\n    FROM {{ ref('crm_account_lifecycle_stage_changes_observed') }}\r\n    WHERE\r\n        new_stage = 'CUSTOMER'\r\n\r\n), became_customer_dates_from_opportunities AS (\r\n\r\n    SELECT\r\n        crm_account_id AS sfdc_account_id,\r\n\r\n        -- An account might have multiple opportunities. The account became customer when the first one was closed won.\r\n        MIN(closed_at) AS became_customer_at\r\n    FROM crm_opportunities\r\n    WHERE\r\n        opportunity_stage = 'CLOSED_WON'\r\n    GROUP BY\r\n        1\r\n\r\n), became_customer_dates_observed AS (\r\n\r\n    -- Some accounts might not have closed won opportunities, but still be a customer. Examples would be Connect4Care\r\n    -- customers, which have a single opportunity which applies to multiple accounts. If an account is manually set\r\n    -- to customer, this should also count as a customer.\r\n    --\r\n    -- We try to get the date at which they became a customer from the property history. Since that wasn't on from\r\n    -- the beginning, we conservatively default to either the creation date of the account or the history tracking\r\n    -- start date, whichever was earlier. Please note that this case should be exceedingly rare.\r\n    SELECT\r\n        sfdc_accounts.sfdc_account_id,\r\n        CASE\r\n            WHEN {{ var('date:sfdc:account_history_tracking:start_date') }} <= sfdc_accounts.created_at\r\n                THEN sfdc_accounts.created_at\r\n            ELSE {{ var('date:sfdc:account_history_tracking:start_date') }}\r\n        END AS default_became_customer_date,\r\n\r\n        COALESCE(\r\n            MIN(crm_account_lifecycle_stage_changes_into_customer_observed.new_stage_entered_at),\r\n            default_became_customer_date\r\n        ) AS became_customer_at\r\n\r\n    FROM sfdc_accounts\r\n    LEFT JOIN crm_account_lifecycle_stage_changes_into_customer_observed\r\n        ON sfdc_accounts.sfdc_account_id = crm_account_lifecycle_stage_changes_into_customer_observed.sfdc_account_id\r\n    WHERE\r\n        sfdc_accounts.lifecycle_stage = 'CUSTOMER'\r\n    GROUP BY\r\n        1,\r\n        2\r\n\r\n)\r\nSELECT\r\n    COALESCE(became_customer_dates_from_opportunities.sfdc_account_id,\r\n        became_customer_dates_observed.sfdc_account_id) AS sfdc_account_id,\r\n    COALESCE(became_customer_dates_from_opportunities.became_customer_at,\r\n        became_customer_dates_observed.became_customer_at) AS became_customer_at\r\nFROM became_customer_dates_from_opportunities\r\nFULL OUTER JOIN became_customer_dates_observed\r\n    ON became_customer_dates_from_opportunities.sfdc_account_id = became_customer_dates_observed.sfdc_account_id",
                "resource_type": "model",
                "depends_on":
                {
                    "macros":
                    [
                        "macro.zivver_dwh.ref",
                        "macro.zivver_dwh.audit_model_deployment_started",
                        "macro.zivver_dwh.audit_model_deployment_completed",
                        "macro.zivver_dwh.grant_read_rights_to_role"
                    ],
                    "nodes":
                    [
                        "model.zivver_dwh.stg_sfdc_accounts",
                        "model.zivver_dwh.crm_opportunities",
                        "model.zivver_dwh.crm_account_lifecycle_stage_changes_observed"
                    ]
                },
                "config":
                {
                    "enabled": true,
                    "materialized": "ephemeral",
                    "persist_docs":
                    {},
                    "vars":
                    {},
                    "quoting":
                    {},
                    "column_types":
                    {},
                    "alias": null,
                    "schema": "bl",
                    "database": null,
                    "tags":
                    [
                        "business_layer",
                        "commercial"
                    ],
                    "full_refresh": null,
                    "crm_record_types": null,
                    "post-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_completed() }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('data_engineer', ['all']) }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('analyst', ['all']) }}",
                            "transaction": true,
                            "index": null
                        }
                    ],
                    "pre-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_started() }}",
                            "transaction": true,
                            "index": null
                        }
                    ]
                },
                "database": "analytics",
                "schema": "bl",
                "fqn":
                [
                    "zivver_dwh",
                    "business_layer",
                    "commercial",
                    "crm_account_lifecycle_stage_changes",
                    "intermediates",
                    "crm_account_became_customer_dates",
                    "crm_account_became_customer_dates"
                ],
                "unique_id": "model.zivver_dwh.crm_account_became_customer_dates",
                "package_name": "zivver_dwh",
                "root_path": "C:\\Users\\tjebbe.bodewes\\Documents\\zivver-dwh\\dwh\\transformations",
                "path": "business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "original_file_path": "models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "name": "crm_account_became_customer_dates",
                "alias": "crm_account_became_customer_dates",
                "checksum":
                {
                    "name": "sha256",
                    "checksum": "a037b5681219d90f8bf8d81641d3587f899501358664b8ec77168901b3e1808b"
                },
                "tags":
                [
                    "business_layer",
                    "commercial"
                ],
                "refs":
                [
                    [
                        "stg_sfdc_accounts"
                    ],
                    [
                        "crm_opportunities"
                    ],
                    [
                        "crm_account_lifecycle_stage_changes_observed"
                    ]
                ],
                "sources":
                [],
                "description": "",
                "columns":
                {
                    "sfdc_account_id":
                    {
                        "name": "sfdc_account_id",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    },
                    "became_customer_at":
                    {
                        "name": "became_customer_at",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    }
                },
                "meta":
                {},
                "docs":
                {
                    "show": true
                },
                "patch_path": "zivver_dwh://models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.yml",
                "compiled_path": null,
                "build_path": null,
                "deferred": false,
                "unrendered_config":
                {
                    "pre-hook":
                    [
                        "{{ audit_model_deployment_started() }}"
                    ],
                    "post-hook":
                    [
                        "{{ grant_read_rights_to_role('analyst', ['all']) }}"
                    ],
                    "tags":
                    [
                        "commercial"
                    ],
                    "materialized": "ephemeral",
                    "schema": "bl",
                    "crm_record_types": null
                },
                "created_at": 1637233875
            }
    
    opened by Tbodewes 2
Releases (v0.11.0)
  • v0.11.0(Dec 1, 2022)

    Added

    • support label and hidden fields (#49)
    • support non-aggregate measures (#41)
    • support bytes and bignumeric for bigquery (#75)
    • support for custom connection name on the cli (#78)

    Changed

    • updated dependencies (#74)

    Fixed

    • Types maps for redshift (#76)

    Removed

    • Strict manifest validation (#77)
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Oct 11, 2021)

  • v0.9.1(Oct 7, 2021)

    Fixed

    • Fixed bug where dbt2looker would crash if a dbt project contained an empty model

    Changed

    • When filtering models by tag, models that have no tag property will be ignored
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Oct 7, 2021)

    Added

    • Support for spark adapter (@chaimt)

    Changed

    • Updated with support for dbt2looker (@chaimt)
    • Lookml views now populate their "sql_table_name" using the dbt relation name
    Source code(tar.gz)
    Source code(zip)
  • v0.8.2(Sep 22, 2021)

    Changed

    • Measures with missing descriptions fall back to column descriptions. If there is no column description, it falls back to "{measure_type} of {column_name}".
    Source code(tar.gz)
    Source code(zip)
  • v0.8.1(Sep 22, 2021)

    Added

    • Dimensions have an enabled flag that can be used to switch off generated dimensions for certain columns with enabled: false
    • Measures can be aliased with any of the following keys: measures, measure, metrics, metric

    Changed

    • Updated dependencies
    Source code(tar.gz)
    Source code(zip)
  • v0.8.0(Sep 9, 2021)

    Changed

    • Command line interface changed argument from --target to --target-dir

    Added

    • Added the --project-dir flag to the command line interface to change the search directory for dbt_project.yml
    Source code(tar.gz)
    Source code(zip)
  • v0.7.3(Sep 9, 2021)

  • v0.7.2(Sep 9, 2021)

  • v0.7.1(Aug 27, 2021)

    Added

    • Use dbt2looker --output-dir /path/to/dir to customise the output directory of the generated lookml files

    Fixed

    • Fixed error with reporting json validation errors
    • Fixed error in join syntax in example .yml file
    • Fixed development environment for python3.7 users
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Apr 18, 2021)

  • v0.6.2(Apr 18, 2021)

  • v0.6.1(Apr 17, 2021)

  • v0.6.0(Apr 17, 2021)

Owner
lightdash