
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; object storage that scales to virtually any volume of raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
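
As a rough illustration, a single load step in Airflow might look like the sketch below. It assumes the apache-airflow-providers-google package; the task id and staging table name are hypothetical.

from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Load one raw CSV file from the GCS data lake into a BigQuery staging table
load_airports = GCSToBigQueryOperator(
    task_id='load_airports',
    bucket='cloud-data-lake-gcp',
    source_objects=['airports/airport-codes_csv.csv'],
    destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS',  # hypothetical table
    source_format='CSV',
    skip_leading_rows=1,
    write_disposition='WRITE_TRUNCATE',
)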

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables.

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimension attributes for date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
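
To make the star schema concrete, here is a sketch of a typical analytical query joining the fact table to one dimension. The column names (country_id, country_name) follow the table descriptions above, but the exact schema is an assumption.

from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')

# Count arrivals per country of origin by joining fact and dimension tables
query = """
    SELECT c.country_name, COUNT(*) AS arrivals
    FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
    JOIN `IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
    GROUP BY c.country_name
    ORDER BY arrivals DESC
"""
for row in client.query(query):
    print(row.country_name, row.arrivals)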

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view of the pipeline]

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. Those operations load data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as parquet files while the others are CSV formatted. Row-count checks run after each load to verify the data landed in BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure it contains no duplicates. The other dimension tables are also created, and the pipeline finishes.
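
The duplicate check can be expressed as a BigQueryCheckOperator that fails the pipeline when its condition is false. This is a sketch; the record id column (cicid) is an assumption about the immigration data schema.

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Fails the task if F_IMMIGRATION_DATA contains duplicate record ids
check_f_immigration_data = BigQueryCheckOperator(
    task_id='check_f_immigration_data',
    use_legacy_sql=False,
    sql="""
        SELECT COUNT(*) = COUNT(DISTINCT cicid)  -- cicid: assumed record id
        FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
    """,
)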

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data, and Airflow is not a bottleneck since it only issues commands to other services.

Pipelines run daily at 7 AM. How do we update the dashboard? Would it still work?

Schedule the DAG to run daily at 7 AM, and set up DAG retries plus email/Slack notifications on failures so a bad run is caught before the dashboard refreshes.
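
A minimal sketch of that schedule and retry configuration, assuming Airflow 2.x (the email address is a placeholder; Slack alerts would additionally need a failure callback from the Slack provider):

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'email_on_failure': True,
    'email': ['data-team@example.com'],  # placeholder address
}

dag = DAG(
    'cloud-data-lake-pipeline',
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',  # every day at 7 AM
    catchup=False,
)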

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the database, we can add steps to write to a horizontally scalable NoSQL store such as Datastore, Cassandra, or Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project but not recommended for a production system. Create a key for the account and keep it somewhere safe. (A gcloud sketch of these steps follows this list.)
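
The same setup can be scripted with the gcloud CLI; this is a sketch, and the account and project names are placeholders:

gcloud iam service-accounts create data-lake-sa
gcloud projects add-iam-policy-binding your-project-id \
    --member='serviceAccount:data-lake-sa@your-project-id.iam.gserviceaccount.com' \
    --role='roles/owner'
gcloud iam service-accounts keys create key.json \
    --iam-account='data-lake-sa@your-project-id.iam.gserviceaccount.com'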

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}
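
For the larger immigration_data folder, gsutil's -m flag runs the copy in parallel, which is usually much faster:

gsutil -m cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}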

Local setup

Clone the project, then set up the environment and required tooling as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI with the pipeline DAG listed]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration screen]
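
If you prefer the command line, a connection can also be created with the Airflow 2.x CLI; this is a sketch, and the key path is a placeholder:

airflow connections add 'google_cloud_default' \
    --conn-type 'google_cloud_platform' \
    --conn-extra '{"extra__google_cloud_platform__key_path": "/usr/local/airflow/key.json", "extra__google_cloud_platform__project": "cloud-data-lake"}'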

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'
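
These variables are typically interpolated into task arguments. Here is a sketch of a dimension-table step using them; the staging table name I94ADDR mirrors the uploaded master data file, but is an assumption.

from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Build D_STATE in the warehouse dataset from the staging table
create_d_state = BigQueryInsertJobOperator(
    task_id='create_d_state',
    configuration={
        'query': {
            'query': f"""
                CREATE OR REPLACE TABLE `{project_id}.{dwh_dataset}.D_STATE` AS
                SELECT state_id, state_name
                FROM `{project_id}.{staging_dataset}.I94ADDR`
            """,
            'useLegacySql': False,
        }
    },
)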

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: [Image: data warehouse tables in BigQuery]
