A data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards.

Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; it scales to hold virtually unlimited amounts of raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale (see the sketch below).
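
As an illustration of the kind of load step Airflow issues, here is a minimal sketch of a single GCS-to-BigQuery load task using the Google provider's GCSToBigQueryOperator (assuming the Airflow Google provider package is installed). The bucket and dataset names reuse values from the setup section further down; the staging table name AIRPORTS and the DAG id are placeholders, not the project's actual code.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG("cloud_data_lake_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    # Load a raw CSV from the data lake into a BigQuery staging table
    load_airports = GCSToBigQueryOperator(
        task_id="load_airports",
        bucket="cloud-data-lake-gcp",                        # GCS bucket acting as the data lake
        source_objects=["airports/airport-codes_csv.csv"],   # raw file stored in GCS
        destination_project_dataset_table="cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS",  # placeholder table
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",                  # reload from scratch on each run
        autodetect=True,                                      # let BigQuery infer the schema
    )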

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables. A sample query over this schema is sketched after the table list below.

[image: star-schema data model]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
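
To show how the star schema answers business questions, here is a hedged example that joins the fact table to a dimension using the BigQuery Python client. The column names (port_id, port_name, visa_type) are assumptions for illustration; the actual schemas are defined by the pipeline.

from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

# Example business question: arrivals per port and visa type.
query = """
    SELECT p.port_name,
           f.visa_type,
           COUNT(*) AS arrivals
    FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS f
    JOIN `cloud-data-lake.IMMIGRATION_DWH.D_PORT` AS p
      ON f.port_id = p.port_id
    GROUP BY p.port_name, f.visa_type
    ORDER BY arrivals DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.port_name, row.visa_type, row.arrivals)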

Data pipeline

This project uses Airflow for orchestration.

[image: Airflow DAG graph]

A DummyOperator start_pipeline kicks off the pipeline, followed by 4 load operations. Those operations load data from the GCS bucket into BigQuery tables. The immigration_data is loaded as Parquet files, while the others are CSV formatted. After each load, an operation checks the row counts in BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure there are no duplicates. The other dimension tables are also created, and the pipeline finishes. A sketch of such a duplicate check is shown below.
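
A hedged sketch of what that duplicate check could look like with the Google provider's BigQueryCheckOperator; the key column cicid is an assumption for illustration.

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Fails the task if F_IMMIGRATION_DATA contains duplicate records.
check_f_immigration_data = BigQueryCheckOperator(
    task_id="check_f_immigration_data",
    sql="""
        SELECT COUNT(*) = COUNT(DISTINCT cicid)  -- cicid assumed to be the record key
        FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
    """,
    use_legacy_sql=False,
)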

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck, since it only issues commands to other services.

Pipelines run at 7 AM daily. How would the dashboard be updated? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures, as sketched below.
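
A minimal sketch of that configuration, assuming email alerting is already configured for the Airflow deployment; the retry counts and address are placeholders. A Slack alert would typically be added through an on_failure_callback.

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    "owner": "airflow",
    "retries": 3,                          # retry failed tasks before alerting
    "retry_delay": timedelta(minutes=5),
    "email": ["data-team@example.com"],    # placeholder address
    "email_on_failure": True,
}

dag = DAG(
    "cloud_data_lake_pipeline",
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",         # every day at 7 AM
    catchup=False,
)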

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the data, we can add steps to write to a horizontally scalable NoSQL database such as Datastore, Cassandra, or Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but it is not recommended for a production system. Keep the service account key somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}
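
If you prefer Python over gsutil, a rough equivalent for a single object using the google-cloud-storage client is sketched below; the bucket names are placeholders and the client needs read access to the source bucket.

from google.cloud import storage

client = storage.Client(project="cloud-data-lake")   # your project id
src_bucket = client.bucket("cloud-data-lake-gcp")    # source bucket with the sample data
dst_bucket = client.bucket("your_bucket_name")       # your own bucket

# Copy one raw file; loop over list_blobs() to copy the rest of the data lake.
blob = src_bucket.blob("airports/airport-codes_csv.csv")
src_bucket.copy_blob(blob, dst_bucket, "airports/airport-codes_csv.csv")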

Local setup

Clone the project, then set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[image: Airflow UI home page]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[image: Airflow connections configuration]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'
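
For reference, these parameters are interpolated into the pipeline's tasks. A hypothetical ETL step creating one dimension table might look like the sketch below; the staging table I94ADDR and the column names are assumptions based on the data dictionary files, not the project's actual SQL.

from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Builds D_STATE from the staged I94 address data (table and column names are illustrative).
create_d_state = BigQueryInsertJobOperator(
    task_id="create_d_state",
    configuration={
        "query": {
            "query": f"""
                CREATE OR REPLACE TABLE `{project_id}.{dwh_dataset}.D_STATE` AS
                SELECT DISTINCT state_id, state_name
                FROM `{project_id}.{staging_dataset}.I94ADDR`
            """,
            "useLegacySql": False,
        }
    },
)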

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[image: the final data warehouse in BigQuery]
