
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform to help answer common business questions and power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data are uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data are then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on Google Compute Engine or run locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake where all raw files land; it scales to virtually any data volume.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing CLI commands to load data to BigQuery and SQL queries for the ETL process. Airflow does not process any data by itself, which allows the architecture to scale. A minimal sketch of this wiring follows this list.
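
To make the flow concrete, here is a minimal sketch of one load-and-transform pair. Task names, the airport column names, and the DAG id are assumptions for illustration; the dataset and bucket names are the ones shown later in this README, and the real DAG lives at dags/cloud-data-lake-pipeline.py.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG('cloud_data_lake_sketch', start_date=datetime(2021, 1, 1),
         schedule_interval=None, catchup=False) as dag:
    # Load a raw CSV from the GCS data lake into a BigQuery staging table
    load_airports = GCSToBigQueryOperator(
        task_id='load_airports',
        bucket='cloud-data-lake-gcp',
        source_objects=['airports/airport-codes_csv.csv'],
        destination_project_dataset_table='IMMIGRATION_DWH_STAGING.AIRPORTS',
        source_format='CSV',
        skip_leading_rows=1,
        write_disposition='WRITE_TRUNCATE',
        autodetect=True,
    )

    # Build a dimension table from staging with a SQL query; Airflow only
    # issues the statement, BigQuery does the heavy lifting
    create_d_airport = BigQueryInsertJobOperator(
        task_id='create_d_airport',
        configuration={'query': {
            'query': '''CREATE OR REPLACE TABLE `IMMIGRATION_DWH.D_AIRPORT` AS
                        SELECT DISTINCT ident, name, iso_region
                        FROM `IMMIGRATION_DWH_STAGING.AIRPORTS`''',
            'useLegacySql': False,
        }},
    )

    load_airports >> create_d_airport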

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables. A sample query against the model follows the table descriptions below.

[Image: star-schema data model]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains date dimension attributes
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
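
As an example of how this model answers business questions, the following sketch counts arrivals by country of origin for a given month. Column names such as country_id, arrival_date, date and month are assumptions; the diagram above shows the actual schema.

# Hypothetical query: join the fact table to its dimensions
ARRIVALS_BY_COUNTRY = '''
SELECT c.country_name, COUNT(*) AS arrivals
FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
JOIN `IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
JOIN `IMMIGRATION_DWH.D_TIME` t ON f.arrival_date = t.date
WHERE t.month = 6
GROUP BY c.country_name
ORDER BY arrivals DESC
'''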

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph]

A DummyOperator named start_pipeline kicks off the pipeline, followed by 4 load operations. Those operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as Parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure there are no duplicates; a sketch of both kinds of checks follows. The other dimension tables are also created, and the pipeline finishes.
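
The sketch below assumes a hypothetical record-id column cicid; BigQueryCheckOperator fails the task when the first value the query returns is false or zero.

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Fail the pipeline if a staging table ended up empty after its load
check_staging_airports = BigQueryCheckOperator(
    task_id='check_staging_airports',
    sql='SELECT COUNT(*) FROM `IMMIGRATION_DWH_STAGING.AIRPORTS`',
    use_legacy_sql=False,
)

# Fail the pipeline if any record id appears more than once in the fact table
check_no_duplicates = BigQueryCheckOperator(
    task_id='check_no_duplicates',
    sql='''SELECT COUNT(*) = 0
           FROM (SELECT cicid
                 FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA`
                 GROUP BY cicid
                 HAVING COUNT(*) > 1)''',
    use_legacy_sql=False,
)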

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines run daily at 7 AM. How would dashboards be updated? Would they still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failure.
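
A sketch of that setup, with a hypothetical alert address (Slack notifications would hang off on_failure_callback instead):

from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    'retries': 3,                        # re-run a failed task up to 3 times
    'retry_delay': timedelta(minutes=5),
    'email_on_failure': True,            # requires SMTP settings in airflow.cfg
    'email': ['data-team@example.com'],  # hypothetical alert address
}

dag = DAG(
    'cloud-data-lake-pipeline',
    default_args=default_args,
    schedule_interval='0 7 * * *',       # every day at 7 AM
    start_date=datetime(2021, 1, 1),
    catchup=False,
)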

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the data, we can add steps to write serving tables out to a horizontally scalable NoSQL database such as Datastore, Cassandra, or Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role and download its key file. This is convenient for this project but not recommended for a production system. Keep your key somewhere safe. A gcloud sketch of these steps follows this list.
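
The same steps can be scripted with the gcloud CLI. This is a sketch using hypothetical project and service-account names; billing still has to be linked in the console:

gcloud projects create cloud-data-lake
gcloud iam service-accounts create data-lake-sa --project=cloud-data-lake
gcloud projects add-iam-policy-binding cloud-data-lake \
    --member="serviceAccount:data-lake-sa@cloud-data-lake.iam.gserviceaccount.com" \
    --role="roles/owner"
gcloud iam service-accounts keys create key.json \
    --iam-account=data-lake-sa@cloud-data-lake.iam.gserviceaccount.com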

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home page]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG, and the pipeline will run.

The data warehouse

The final data warehouse looks like this: [Image: final data warehouse]
