Overview

RasgoQL Hero

RasgoQL

Write python locally, execute SQL in your data warehouse
« Read the Docs   ·   Join Our Slack »

RasgoQL is a Python package that enables you to easily query and transform tables in your Data Warehouse directly from a notebook.

You can quickly create new features, sample data, apply complex aggregates... all without having to write SQL!

Choose from our library of predefined transformations or make your own to streamline the feature engineering process.

RasgoQL 30-second demo

Why is this package useful?

Data scientists spend much of their time in pandas preparing data for modelling. When they are ready to deploy or scale, two pain points arise:

  1. pandas cannot handle larger volumes of data, forcing the use of VMs or code refactoring.
  2. feature data must be added to the Enterprise Data Warehouse for future processing, requiring the pandas work to be refactored into SQL.

We created RasgoQL to solve these two pain points.

Learn more at https://docs.rasgoql.com.

How does it work?

Under the covers, RasgoQL sends all processing to your Data Warehouse, enabling the efficient transformation of massive datasets. RasgoQL only needs basic metadata to execute transforms, so your private data remains secure.

RasgoQL workflow diagram

RasgoQL does these things well (see the short sketch after this list):

  • Pulls existing Data Warehouse tables into pandas DataFrames for analysis
  • Constructs SQL queries using a syntax that feels like pandas
  • Creates views in your Data Warehouse to save transformed data
  • Exports runnable SQL in .sql files or dbt-compliant .yaml files
  • Offers dozens of free SQL transforms to use
  • Coming Soon: allows users to create & add custom transforms
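
For orientation, here is a minimal sketch touching these points. It assumes a connection object rql created as in the Quick Start section below; every call shown (dataset, preview, datetrunc, sql, to_df) appears elsewhere on this page.

# A quick tour, assuming `rql` was created with rasgoql.connect(creds)
ds = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')
ds.preview()                                        # sample the table as a pandas DataFrame

chain = ds.datetrunc(dates={'ORDERDATE': 'week'})   # pandas-like transform syntax
print(chain.sql())                                  # the SQL that will run in your Data Warehouse
df = chain.to_df()                                  # pull the full result into pandas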

Rasgo supports Snowflake, BigQuery, Postgres, and Amazon Redshift, with more Data Warehouses being added soon. If you'd like to suggest another database type, submit your idea to our GitHub Discussions page so that other community members can weigh in and show their support.
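
The Quick Start below connects with Snowflake credentials. For BigQuery, the connection pattern looks like the sketch below, which mirrors a snippet from the issue reports further down this page; the file path, project, and dataset values are placeholders.

import rasgoql
from rasgoql import BigQueryCredentials

# Placeholder values -- swap in your own service-account file, project, and dataset
creds = BigQueryCredentials(
    json_filepath="/path/to/service_account.json",
    project="my-project",
    dataset="my_dataset",
)
rql = rasgoql.connect(creds)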

Can RasgoQL help you?

  • If you use pandas to build features but are working on a dataset too large to fit in your machine's memory, RasgoQL can help!

  • If your organization uses dbt or another SQL tool to run production data flows, but you prefer to build features in pandas, RasgoQL can help!

  • If you know pandas but not SQL and want to learn how your queries will translate, RasgoQL can help!

Where to get it

Just run a simple pip install.

pip install rasgoql~=1.0
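
The issues below also reference optional warehouse extras such as rasgoql[snowflake]. If your shell is zsh, quote the package spec so the square brackets are not treated as a glob pattern:

pip install "rasgoql[snowflake]"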

Report Bug · Suggest Improvement · Request Feature

Quick Start

pip install rasgoql --upgrade

import rasgoql

# Enter your data warehouse credentials
creds = rasgoql.SnowflakeCredentials(
    account="",
    user="",
    password="",
    role="",
    warehouse="",
    database="",
    schema=""
)

# Connect to DW
rql = rasgoql.connect(creds)

# List available tables
rql.list_tables('ADVENTUREWORKS').head(10)

# Allow rasgoQL to interact with an existing Table in your Data Warehouse
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Take a peek at the data
dataset.preview()

# Use the datetrunc transform to truncate ORDERDATE to weeks
weekly_sales = dataset.datetrunc(dates={'ORDERDATE':'week'})

# Aggregate to sum of sales for each week
agg_weekly_sales = weekly_sales.aggregate(
    group_by=['PRODUCTKEY', 'ORDERDATE_WEEK'],
    aggregations={'SALESAMOUNT': ['SUM']},
    )

# Quickly validate output
agg_weekly_sales.to_df()

# Print the SQL
print(agg_weekly_sales.sql())
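
Once the chain looks right, you can persist or export it. The save() and to_dbt() calls below are based on method names that appear in the issues and release notes further down this page (to_dbt's output_directory parameter comes from the signature shown in a traceback there); treat the exact parameters as version-dependent and the 'models/' path as a placeholder.

# Save the transformed data as a view in your Data Warehouse
agg_weekly_sales.save()

# Export a dbt-compliant model file (output directory is a placeholder)
agg_weekly_sales.to_dbt(output_directory='models/')

# Or write the rendered SQL to a .sql file yourself
with open('agg_weekly_sales.sql', 'w') as f:
    f.write(agg_weekly_sales.sql())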

Getting Started Tutorials

The best way to get familiar with the RasgoQL basics is by running through these notebooks in the tutorials folder.

Advanced Examples

Joins

Easily join tables together using the join transform.

sales_dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

sales_product_dataset = sales_dataset.join(
  join_table='DIM_PRODUCT',
  join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
  join_type='LEFT',
  join_prefix='PRODUCT')

sales_product_dataset.sql()
sales_product_dataset.preview()

Rasgo Join Example

Chain transforms together

Create a rolling aggregation and then drop unnecessary columns.

sales_agg_drop = sales_dataset.rolling_agg(
    aggregations={"SALESAMOUNT": ["MAX", "MIN", "SUM"]},
    order_by="ORDERDATE",
    offsets=[-7, 7],
    group_by=["PRODUCTKEY"],
).drop_columns(exclude_cols=["ORDERDATEKEY"])

sales_agg_drop.sql()
sales_agg_drop.preview()

Multiple rasgoql transforms

Transpose unique values with pivots

Quickly generate pivot tables of your data.

sales_by_product = sales_dataset.pivot(
    dimensions=['ORDERDATE'],
    pivot_column='SALESAMOUNT',
    value_column='PRODUCTKEY',
    agg_method='SUM',
    list_of_vals=['310', '345'],
)

sales_by_product.sql()
sales_by_product.preview()

Rasgoql pivot example

Does any of my data get collected?

Rasgo will not collect any personal information. We log execution of methods in transforms.py for success and failure so that we can more accurately track what's useful and what's problematic.

Where do I go for help?

If you have any questions, please check:

  1. RasgoQL Docs
  2. Slack
  3. GitHub Issues

How can I contribute?

Review the contributors guide

License

RasgoQL uses the GNU AGPL license, as found in the LICENSE file.

This project is sponsored by RasgoML. Find out more at https://www.rasgoml.com/

Comments
  • [BigQuery] fqtn is not valid if project name contains '-'

    Hello,

    The fqtn of the table I want to get follows this pattern: my-awesome-project.schema.table. I tried to get it using rql.dataset(fqtn="my-awesome-project.schema.table") but I get ValueError: my-awesome-project.schema.table is not a well-formed fqtn. It seems that the validate_fqtn() function applies the regex \w+\.\w+\.\w+, which doesn't accept my GCP project name pattern. Is there a way to make this work without changing my GCP project name?
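
    A minimal illustration of the validation behavior described above (not RasgoQL's actual code; the stricter pattern is the one reported in this issue, the looser one is only a sketch of a possible fix):

    import re

    # Pattern reported above: \w does not match '-' in the project segment
    strict = re.compile(r'\w+\.\w+\.\w+')
    print(bool(strict.fullmatch('my-awesome-project.schema.table')))      # False

    # A more permissive pattern that also allows hyphens (illustrative only)
    permissive = re.compile(r'[\w-]+\.\w+\.\w+')
    print(bool(permissive.fullmatch('my-awesome-project.schema.table')))  # True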

    Thank you for this awesome package, I can't wait to try it! ❤️ 🚀

    bug 
    opened by amirbtb 10
  • [BigQuery] `to_dbt()` raises IndexError

    Hi,

    I am trying to generate dbt files (.sql & .yml) from a SQLChain using to_dbt(). The source table is a regular table. I'm using rasgoql 1.0.2a2.

    Here is my code; I'm just trying to generate base SQL code for casting the table:

    import rasgoql
    from rasgoql import BigQueryCredentials
    
    PROJECT = "my-project"
    DATASET = "dataset"
    
    creds = BigQueryCredentials(
        json_filepath="/credentials/path",
        project=PROJECT,
        dataset=DATASET
    )
    rql = rasgoql.connect(creds)
    
    ds = rql.dataset(fqtn=f"{PROJECT}.{DATASET}.table")
    
    schema_dict = {column:data_type for column, data_type in ds.get_schema()}
    schema_dict
    
    ds_casted = ds.transform(
      transform_name='cast',
      casts=schema_dict
    )
    
    ds_casted.to_dbt('./test')
    
    

    I get the following error:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /src/test.ipynb Cell [1]
    ----> 1 ds_casted.to_dbt('./test')

    File /usr/local/lib/python3.8/dist-packages/rasgoql/utils/decorators.py:40, in beta.<locals>.wrapper(*args, **kwargs)
         30 @functools.wraps(func)
         31 def wrapper(*args, **kwargs):
         32     logger.info(
         33         f'{func.__name__} is a beta feature. '
         34         'Its functionality and parameters may change in future versions and '
       (...)
         38         'or contact us directly on slack.'
         39     )
    ---> 40     return func(*args, **kwargs)

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:402, in SQLChain.to_dbt(self, output_directory, file_name, config_args, include_schema)
        394         chn_logger.warning(
        395             'Unexpected error generating the schema of this SQLChain. '
        396             'Your model.sql file will be generated without a schema.yml file. '
       (...)
        399             'your_chn.save() to update the view definition in your Data Warehouse.'
        400         )
        401     schema = []
    --> 402 return create_dbt_files(
        403     self.transforms,
        404     schema,
        405     output_directory,
        406     file_name,
        407     config_args,
        408     include_schema
        409 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:105, in create_dbt_files(transforms, schema, output_directory, file_name, config_args, include_schema)
        102 output_directory = output_directory or os.getcwd()
        103 file_name = file_name or f'{transforms[-1].output_alias}.sql'
        104 return save_model_file(
    --> 105     sql_definition=assemble_cte_chain(transforms),
        106     output_directory=output_directory,
        107     file_name=file_name,
        108     config_args=config_args,
        109     include_schema=include_schema,
        110     schema=schema
        111 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:36, in assemble_cte_chain(transforms, table_type)
         34     t = transforms[0]
         35     create_stmt = _set_create_statement(table_type, t.fqtn)
    ---> 36     final_select = generate_transform_sql(
         37         t.name,
         38         t.arguments,
         39         t.source_table,
         40         None,
         41         t._dw
         42     )
         43     return create_stmt + final_select
         45 # Handle multi-transform chains

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:124, in generate_transform_sql(name, arguments, source_table, running_sql, dw)
        120 """
        121 Returns the SQL for a Transform with applied arguments
        122 """
        123 templates = rtx.serve_rasgo_transform_templates(dw.dw_type)
    --> 124 udt: 'TransformTemplate' = [t for t in templates if t.name == name][0]
        125 if not udt:
        126     raise TransformRenderingError(f'Cannot find a transform named {name}')

    IndexError: list index out of range
    bug 
    opened by amirbtb 9
  • Support upstream snowflake connector

    Is your feature request related to a problem? Please describe.
    Need more of the snowflake connection options that are defined here: https://github.com/snowflakedb/snowflake-connector-python/blob/main/src/snowflake/connector/connection.py#L112

    Describe the solution you'd like
    The ability to directly use the snowflake connector, or all of its options

    enhancement 
    opened by pbarker 6
  • `ds.transform(name='cast',casts=cast_dict)` creates duplicate columns  | BigQuery

    Hi,

    The preview of ds.transform(name='cast', casts=cast_dict) shows a dataset with both the old and the new (casted) columns. I took a look at cast.sql and saw that it starts with a SELECT *. Suggestion: I believe ds.transform(name='cast', casts=cast_dict) should cast the provided columns while keeping the other ones unchanged.

    Thank you 🙏

    enhancement 
    opened by amirbtb 6
  • Support fetching batches from Snowflake

    Is your feature request related to a problem? Please describe.
    Hey, love the tool, I am loading large datasets that won't fit into memory

    Describe the solution you'd like
    Would like to use https://docs.snowflake.com/en/user-guide/python-connector-api.html#fetch_pandas_batches

    enhancement 
    opened by pbarker 5
  • `to_dbt()` creates a view in the schema of the source table  | BigQuery

    Hi,

    After I run to_dbt(), I noticed that a view is created in the BigQuery schema where the source table (a normal table) is located. The view has the same name as the .sql file created in the output path I provided to to_dbt(), and its query is similar to the content of that .sql file. The ability to quickly create a view based on all the transformations performed via RasgoQL is very useful, but I'm not sure it should be a default output of to_dbt().

    Again, thank you for your work! 🙏🏽

    bug 
    opened by amirbtb 5
  • Trouble with import

    Describe the bug
    I get this error when I try to pip install rasgoql[snowflake]: no matches found: rasgoql[snowflake]

    Prior to running this, I successfully downloaded the snowflake connector...

    To Reproduce
    Go to bash and type: pip install rasgoql[snowflake]

    Expected Behavior: successful import

    Actual Behavior: bash returns: no matches found: rasgoql[snowflake]

    Version Information: rasgoql==1.1.1, rasgotransforms==1.1.3

    Additional context
    I'm trying to connect to a trial Snowflake account

    opened by mashhype 3
  • `ds.concat` doesn't accept the `name` argument | BigQuery

    Hello,

    I tried to use the ds.concat() method as shown in the example in the documentation. ds.concat(concat_list=['first_column', "'-'", 'second_column'], name="both_columns") returns the following error:

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:54, in TransformableClass._create_aliased_function.<locals>.f(*arg, **kwargs)
         53 def f(*arg, **kwargs) -> 'SQLChain':
    ---> 54     return self.transform(name=transform.name, *arg, **kwargs)

    TypeError: transform() got multiple values for keyword argument 'name'

    When I don't provide the name argument, the function works.

    Thanks again !

    bug 
    opened by amirbtb 3
  • New DB Request | BigQuery

    Would like to use RasgoQL with BigQuery

    Open questions:

    • will transform templates need changes to support BigQuery SQL syntax?
    • may need to support Google OAuth login since creds are typically tied to a Google account
    enhancement 
    opened by cpdough 3
  • Nest Transform Arguments

    This PR allows a transform to accept a Dataset or SQLChain as an input argument. The new logic flattens the primitive to either a fqtn or a CTE wrapped in parentheses and nests it in the running CTE. I don't know why this works, but 10 Budweisers can't be wrong. JK! It was 11.

    opened by griffatrasgo 1
  • #47 RAS-2651 Adding Amazon Redshift

    Adding Amazon Redshift support.

    Test file attached here _test_demo_redshift.zip

    Need the following environment variables:

    REDSHIFT_USER="<dbuser>"
    REDSHIFT_PASSWORD="<dbpass>"
    REDSHIFT_DATABASE="dev"
    REDSHIFT_SCHEMA="public"
    REDSHIFT_HOST="<cluster-host>"
    REDSHIFT_PORT=5439
    REDSHIFT_DB_USER="<dbuser>"
    
    opened by ChrisGriffithRASGO 1
  • Document connecting to data warehouses using dictionary args

    Document connecting to data warehouses using dictionary args

    What feature are you requesting?

    There aren't any docs on connecting to a data warehouse using a dictionary. The current credential classes are limited and gave the impression I wouldn't be able to connect to my warehouse

    Are you using a workaround to do it in or outside of the product today?

    Read the code and figured it out

    How important is this feature to your continued use of the package? Can you qualify the value / importance of this feature in any way?

    I think this is pretty important since it gave me the impression I couldn't use this product

    enhancement 
    opened by pbarker 2
Releases(1.6.4)
  • 1.6.4(Jul 5, 2022)

    Version 1.6.4 - 2022-07-05

    Changed

    • Changed default behavior of to_dbt function. Instead of always appending model details to the schema.yml file (which creates duplicate entries for existing models), rql will now check if a model entry already exists in the file and overwrite it. If the model does not exist, it will be appended.
    Source code(tar.gz)
    Source code(zip)
  • 1.6.3(Jun 28, 2022)

  • 1.6.2(Jun 27, 2022)

  • 1.6.1(Jun 27, 2022)

    Version 1.6.1 - 2022-06-27

    Fixed

    • Fixed a bug in the get_schema method of SQLAlchemy DW classes where users were being asked to enter an overwrite param they cannot access
    Source code(tar.gz)
    Source code(zip)
  • 1.6.0(Jun 23, 2022)

    Version 1.6.0 - 2022-06-23

    Changed

    • Changed the get_schema method on all DW classes to accept a single fqtn_or_sql variable
    • Changed the behavior of transform arguments: when a Dataset or SQLChain class is passed in as an argument to a transform, it is automatically flattened to its corresponding fqtn or CTE, then consumed in the transform (see the sketch below).
    Source code(tar.gz)
    Source code(zip)
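
    As a hedged illustration of the second change, a Dataset can now be passed where a table name was previously required. The names below reuse the join example from the Advanced Examples section, and the DIM_PRODUCT fqtn is a placeholder; the exact call may differ by version.

    # Pass a Dataset directly as a transform argument; per the changelog it is
    # flattened to its fqtn (or to a CTE, for a SQLChain) before rendering
    product_ds = rql.dataset('ADVENTUREWORKS.PUBLIC.DIM_PRODUCT')

    sales_product_dataset = sales_dataset.join(
        join_table=product_ds,            # Dataset instead of a table-name string
        join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
        join_type='LEFT',
        join_prefix='PRODUCT',
    )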
  • 1.5.6(Jun 21, 2022)

    Version 1.5.6 - 2022-06-20

    Changed

    • Changed the get_schema method on Snowflake and BigQuery DW classes to get output columns without creating views
    Source code(tar.gz)
    Source code(zip)
  • 1.5.5(Jun 7, 2022)
