
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on Google Compute Engine or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; it is highly scalable and stores all raw files as-is.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables. An example query over this schema is sketched after the table list.

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for date column
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
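
To illustrate how the star schema is meant to be queried, here is a minimal sketch using the google-cloud-bigquery client. The join keys (country_id, state_id) and exact column names are assumptions based on the table descriptions above and may differ from the actual schema:

from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

# Count arrivals by origin country and destination state by joining the
# fact table to two dimension tables (column names are assumptions).
query = """
    SELECT c.country_name,
           s.state_name,
           COUNT(*) AS arrivals
    FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
    JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
    JOIN `cloud-data-lake.IMMIGRATION_DWH.D_STATE` s ON f.state_id = s.state_id
    GROUP BY c.country_name, s.state_name
    ORDER BY arrivals DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.country_name, row.state_name, row.arrivals)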

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view]

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as Parquet files while the others are in CSV format. Row-count checks run after each load to BigQuery.
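
As a sketch of what one of these load steps and its row-count check could look like in Airflow (bucket paths, table names, and operator parameters are illustrative assumptions that reuse the project_id, staging_dataset, and gs_bucket parameters shown further below, plus an existing dag object):

from airflow.operators.dummy import DummyOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

start_pipeline = DummyOperator(task_id="start_pipeline", dag=dag)

# Load the Parquet immigration files from the data lake into a staging table.
load_immigration_data = GCSToBigQueryOperator(
    task_id="load_immigration_data",
    bucket=gs_bucket,
    source_objects=["immigration_data/*.parquet"],
    destination_project_dataset_table=f"{project_id}.{staging_dataset}.IMMIGRATION_DATA",
    source_format="PARQUET",
    write_disposition="WRITE_TRUNCATE",
    autodetect=True,
    dag=dag,
)

# Fail fast if the staging table ended up empty after the load.
check_immigration_data = BigQueryCheckOperator(
    task_id="check_immigration_data",
    sql=f"SELECT COUNT(*) > 0 FROM `{project_id}.{staging_dataset}.IMMIGRATION_DATA`",
    use_legacy_sql=False,
    dag=dag,
)

start_pipeline >> load_immigration_data >> check_immigration_data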

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure it contains no duplicates. The other dimension tables are also created, and the pipeline finishes.
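
The duplicate check can be expressed as a data-quality task. Below is a minimal sketch reusing BigQueryCheckOperator from the previous snippet; the cicid key column is an assumption about the immigration data's record identifier:

# Fails the task (and therefore the pipeline) if any record id appears
# more than once in the fact table. The cicid column name is an assumption.
check_f_immigration_duplicates = BigQueryCheckOperator(
    task_id="check_f_immigration_duplicates",
    sql=f"""
        SELECT COUNT(*) = 0
        FROM (
            SELECT cicid
            FROM `{project_id}.{dwh_dataset}.F_IMMIGRATION_DATA`
            GROUP BY cicid
            HAVING COUNT(*) > 1
        )
    """,
    use_legacy_sql=False,
    dag=dag,
)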

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines would be run at 7 AM daily. How would the dashboard be updated? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failure.
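
A minimal sketch of the DAG configuration for this scenario is shown below; the retry counts, start date, and email address are placeholders, and a Slack alert could be attached in the same way via an on_failure_callback:

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    "owner": "airflow",
    "retries": 3,                          # retry failed tasks a few times
    "retry_delay": timedelta(minutes=5),
    "email": ["data-team@example.com"],    # placeholder address
    "email_on_failure": True,
}

dag = DAG(
    "cloud-data-lake-pipeline",
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",         # every day at 7 AM
    catchup=False,
)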

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the data, we can add steps to write to a NoSQL database such as Datastore, Bigtable, or Cassandra, or to a horizontally scalable SQL database such as Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but not recommended for a production system. Download the service account key and keep it somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home page listing the DAG]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connections configuration]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: BigQuery datasets and tables of the final data warehouse]
