Build a data warehouse on Google Cloud Platform to answer common business questions and power dashboards.

Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance, deployed on a Google Compute Engine instance or locally, orchestrates the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; storage scales to virtually any volume of raw files.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery or run SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
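
To make the GCS-to-BigQuery load step concrete, here is a minimal sketch of a load task using the Google provider for Airflow. The dataset, table, and object names are illustrative and may differ from the actual project code.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG("load_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    # Load one CSV from the data lake into a BigQuery staging table (names are examples).
    load_city_demo = GCSToBigQueryOperator(
        task_id="load_city_demographics",
        bucket="cloud-data-lake-gcp",
        source_objects=["cities/us-cities-demographics.csv"],
        destination_project_dataset_table="IMMIGRATION_DWH_STAGING.CITY_DEMO",
        source_format="CSV",
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",   # reload the staging table on each run
    )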

Data Model

The database is designed following star-schema principles, with one fact table and seven dimension tables. An example query over this schema is sketched after the list below.

[Image: star-schema data model]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains date dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
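
As an illustration of how the star schema answers business questions, the sketch below counts arrivals per country of origin with the BigQuery Python client. The join key and column names are assumptions based on the dimension descriptions above and may not match the actual schema exactly.

from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

query = """
    SELECT c.country_name, COUNT(*) AS arrivals
    FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
    JOIN `IMMIGRATION_DWH.D_COUNTRY` c
      ON f.country_id = c.country_id   -- assumed join key
    GROUP BY c.country_name
    ORDER BY arrivals DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.country_name, row.arrivals)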

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow pipeline DAG]

A DummyOperator named start_pipeline kicks off the pipeline, followed by four load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as Parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery.

Next, the pipeline loads three master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure it contains no duplicates. The other dimension tables are also created and the pipeline finishes.
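
The sketch below shows the overall shape of such a DAG: a DummyOperator start, loads from GCS into staging, a row-count check, the fact-table build, and a finishing task. Task names and the check query are illustrative; the real DAG lives in dags/cloud-data-lake-pipeline.py.

from datetime import datetime
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG("pipeline_sketch", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    start_pipeline = DummyOperator(task_id="start_pipeline")
    load_immigration = DummyOperator(task_id="load_immigration_data")   # GCS -> staging (Parquet)
    load_airports = DummyOperator(task_id="load_airports")              # GCS -> staging (CSV)
    check_immigration = BigQueryCheckOperator(
        task_id="check_immigration_rows",
        sql="SELECT COUNT(*) FROM `IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`",  # assumed table name
        use_legacy_sql=False,
    )
    create_fact = DummyOperator(task_id="create_f_immigration_data")    # staging -> F_IMMIGRATION_DATA
    finish_pipeline = DummyOperator(task_id="finish_pipeline")

    start_pipeline >> [load_immigration, load_airports]
    load_immigration >> check_immigration >> create_fact >> finish_pipeline
    load_airports >> finish_pipeline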

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines run daily at 7 AM: how would the dashboard be updated, and would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures.
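
One way to set this up is sketched below; the cron expression, retry counts, and email address are suggestions rather than the project's actual configuration. Slack alerts could be added with an on_failure_callback.

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    "owner": "airflow",
    "retries": 3,                          # retry failed tasks before alerting
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,              # requires SMTP settings in airflow.cfg
    "email": ["data-team@example.com"],    # example address
}

dag = DAG(
    "cloud_data_lake_pipeline",
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",         # every day at 7 AM
    catchup=False,
)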

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the database, we can add steps to write the serving tables to a horizontally scalable NoSQL database such as Datastore, Bigtable, or Cassandra.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project but not recommended for a production system. Create a key for the service account and keep it somewhere safe (see the sketch below for a quick way to check that the key works).
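
A quick, optional sanity check for the downloaded key is to list the project's buckets with the Cloud Storage Python client; the key path here is only an example.

import os
from google.cloud import storage

# Point the Google client libraries at the downloaded service-account key (example path).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"

client = storage.Client()
for bucket in client.list_buckets():
    print(bucket.name)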

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment and required tooling as follows:

Install Docker if it is not already installed (see the official Docker documentation).

Install the Astronomer CLI by following the instructions in the Astronomer documentation.

Run the following commands to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home page]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: final data warehouse tables in BigQuery]
