CS50 pset9: Using Flask to create a web application for buying and selling shares of stocks.

Overview

C$50 Finance

In this guide we will implement a website via which users can “register”, “log in”, “buy”, and “sell” stocks, as shown below:

Picture of dashboard

Background

If you’re not quite sure what it means to buy and sell stocks (i.e., shares of a company), head here for a tutorial.

We’re about to implement C$50 Finance, a web app via which you can manage portfolios of stocks. Not only will this tool allow you to check real stocks’ actual prices and your portfolio’s value, it will also let you buy and sell stocks by querying IEX for their prices.

Indeed, IEX lets you download stock quotes via their API (application programming interface) using URLs like https://cloud.iexapis.com/stable/stock/nflx/quote?token=API_KEY.

Before getting started on this project, we’ll need to register for an API key in order to be able to query IEX’s data. To do so, follow these steps:

  • Visit iexcloud.io/cloud-login#/register/.
  • Select the “Individual” account type, then enter your email address and a password, and click “Create account”.
  • Once registered, scroll down to “Get started for free” and click “Select Start” to choose the free plan.
  • Once you’ve confirmed your account via a confirmation email, visit https://iexcloud.io/console/tokens.
  • Copy the key that appears under the Token column (it should begin with pk_).
  • In a terminal window execute:
export API_KEY=value

where value is that (pasted) value, without any space immediately before or after the =. You also may wish to paste that value in a text document somewhere, in case you need it again later.
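As a quick sanity check, you can query that quote endpoint from Python yourself. Below is a minimal sketch of a lookup helper, assuming the API_KEY exported above and the requests package; the response fields companyName, latestPrice, and symbol follow IEX’s documented quote format, and the function name itself is just illustrative.

# Minimal sketch of a quote lookup using the API_KEY environment variable.
# Field names such as "companyName" and "latestPrice" follow IEX's quote format.
import os
import requests

def lookup(symbol):
    """Return name, price, and symbol for a stock, or None on failure."""
    api_key = os.environ.get("API_KEY")
    url = f"https://cloud.iexapis.com/stable/stock/{symbol}/quote?token={api_key}"
    try:
        response = requests.get(url)
        response.raise_for_status()
        quote = response.json()
        return {
            "name": quote["companyName"],
            "price": float(quote["latestPrice"]),
            "symbol": quote["symbol"],
        }
    except (requests.RequestException, KeyError, ValueError):
        return None

print(lookup("nflx"))  # prints a dict with name, price, and symbol (or None)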

Install requirements

This guide was written with Windows Terminal in mind; if you are using another OS, you may need to adjust the commands.

Before we start, you should clone this GitHub repo and then install the dependencies.

git clone https://github.com/magnooj/CS50-finance.git
cd CS50-finance
pip install -r requirements.txt

Through the files

Now we are ready to explore and test our project. Run ls to see the project's files.

Flask API

The first step in building an API is to think about the data we want to handle, how we want to handle it, and what output we want from the API. In our example, we want users to be able to register, log in, log out, buy, sell, and quote stocks, and finally see the history of their transactions.
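To support those actions, the app needs to persist users and their transactions. The snippet below is only a sketch of one possible SQLite schema; the table and column names are assumptions for illustration, and the finance.db that ships with the repo may be organized differently.

# Hypothetical schema sketch; the repo's actual finance.db may differ.
import sqlite3

conn = sqlite3.connect("finance.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    hash TEXT NOT NULL,                      -- hashed password, never plaintext
    cash NUMERIC NOT NULL DEFAULT 10000.00   -- starting balance
);

CREATE TABLE IF NOT EXISTS transactions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL REFERENCES users(id),
    symbol TEXT NOT NULL,
    shares INTEGER NOT NULL,                 -- positive for buys, negative for sells
    price NUMERIC NOT NULL,
    transacted TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")
conn.commit()
conn.close()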

The main HTML file in our app is layout.html. It is a template that the other HTML files extend.

In this example, we create eight Flask routes so that we can serve HTTP traffic on each of them (a minimal sketch of one such route follows the list below):

  • / or index : The homepage of our app. If the user is logged in, it displays the user’s current cash balance along with a grand total (i.e., stocks’ total value plus cash). If the user is not logged in, it displays the login page.
  • register : Displays a form the user can fill out to register.
  • buy : On this route, users can enter a stock’s symbol and buy shares of it.
  • sell : On this page, users can select one of the stocks they own and sell shares of it.
  • quote : Users can look up the current price of a share for a given stock symbol.
  • history : It displays an HTML table summarizing all of a user’s transactions ever, listing row by row each and every buy and every sell.
  • login and logout : These routes start and terminate the user’s session.
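As a rough illustration, here is a minimal sketch of what the quote route might look like. It assumes a lookup() helper like the one sketched earlier and templates named quote.html and quoted.html; those names are illustrative, and the route in the repo likely also uses a login_required decorator and renders apology.html (mentioned below) on errors.

# Minimal sketch of a quote route; template names and error handling are illustrative.
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/quote", methods=["GET", "POST"])
def quote():
    """Show a form on GET; on POST, look up the submitted symbol's price."""
    if request.method == "POST":
        symbol = request.form.get("symbol")
        stock = lookup(symbol)  # lookup() as sketched in the API key section above
        if stock is None:
            return "invalid symbol", 400  # the real app renders apology.html instead
        return render_template("quoted.html", stock=stock)
    return render_template("quote.html")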

Of course, there are a few more files, like apology.html, which displays error messages to the user. Feel free to look through the other files as well.

Now that we have gone through the files and seen how the app works, we can run it. From the CS50-finance directory, enter this command in the terminal:

flask run

I hope you enjoyed learning how to build a stock-exchange web application using Flask. If you have any comments, please do not hesitate to send me an e-mail.

Regards,

Ali Ganjizadeh
