
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To that end, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance, deployed on Google Compute Engine or run locally, orchestrates the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake and scales seamlessly to any volume of raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts, and the ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing CLI commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
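
As a rough illustration of the load step, here is a minimal sketch of one GCS-to-BigQuery task. It assumes the apache-airflow-providers-google package is installed; the bucket, project, and dataset names follow the setup described later in this README.

from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Load the raw Parquet immigration files from the data lake into a staging table.
load_immigration_data = GCSToBigQueryOperator(
    task_id='load_immigration_data',
    bucket='cloud-data-lake-gcp',
    source_objects=['immigration_data/*.parquet'],
    destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data',
    source_format='PARQUET',
    write_disposition='WRITE_TRUNCATE',  # reload staging from scratch on each run
    gcp_conn_id='google_cloud_default',
)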

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables.

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimension attributes for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
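
To make the star schema concrete, here is a hedged sketch of the kind of query that could build the fact table from staging. The staging column names (cicid, arrdate, depdate, i94res, i94port) are assumptions about the I94 dataset, not confirmed by this project.

from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Build F_IMMIGRATION_DATA by joining the staging table to the dimension tables.
create_f_immigration_data = BigQueryInsertJobOperator(
    task_id='create_f_immigration_data',
    gcp_conn_id='bigquery_default',
    configuration={
        'query': {
            'query': '''
                CREATE OR REPLACE TABLE `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS
                SELECT
                    i.cicid,                        -- assumed record id
                    i.arrdate  AS arrival_date,
                    i.depdate  AS departure_date,
                    i.visatype,
                    i.gender,
                    c.country_name AS origin_country,
                    p.port_name
                FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data` i
                LEFT JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` c ON i.i94res  = c.country_id
                LEFT JOIN `cloud-data-lake.IMMIGRATION_DWH.D_PORT`    p ON i.i94port = p.port_id
            ''',
            'useLegacySql': False,
        }
    },
)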

Data Pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view]

A DummyOperator, start_pipeline, kicks off the pipeline and is followed by 4 load operations that copy data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as Parquet files, while the others are CSV formatted. Row-count checks run after each load to BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure it contains no duplicates, the other dimension tables are created, and the pipeline finishes.
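
The duplicate check can be expressed as a single boolean query; below is a minimal sketch using BigQueryCheckOperator, where the key column cicid is again an assumption about the immigration data.

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Fails the task if any cicid appears more than once in the fact table.
check_f_immigration_duplicates = BigQueryCheckOperator(
    task_id='check_f_immigration_duplicates',
    gcp_conn_id='bigquery_default',
    use_legacy_sql=False,
    sql='''
        SELECT COUNT(*) = 0
        FROM (
            SELECT cicid
            FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
            GROUP BY cicid
            HAVING COUNT(*) > 1
        )
    ''',
)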

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery handle petabyte-scale data, and Airflow is not a bottleneck since it only issues commands to other services.

Pipelines run daily at 7 AM and must update a dashboard. Would it still work?

Schedule the DAG to run daily at 7 AM, and set up DAG retries plus email/Slack notifications on failures, as sketched below.
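
A minimal sketch of that schedule, assuming a standard Airflow DAG definition; the DAG id matches this project, while the email address is illustrative.

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'airflow',
    'retries': 2,                          # retry failed tasks twice
    'retry_delay': timedelta(minutes=5),
    'email': ['data-team@example.com'],    # illustrative address
    'email_on_failure': True,
}

dag = DAG(
    'cloud-data-lake-pipeline',
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',         # every day at 7 AM
    catchup=False,
)

Slack notifications can be added the same way, for example via an on_failure_callback that posts to a Slack webhook.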

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle access from 100+ people. If even more people or services need access to the data, we can add steps to write to a NoSQL database such as Datastore, Cassandra, or Bigtable, or to a SQL database that supports horizontal scaling, such as Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project but not recommended for a production system. Keep your service account key somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the environment and required tooling as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is setup correctly, you will see the following screen:

[Image: Airflow UI home screen]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default.

[Image: Airflow connection settings]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: [Image: BigQuery datasets and tables in the final data warehouse]
