Config files for my GitHub profile.

Overview

Canalyst Candas Data Science Library

Name

Canalyst Candas

Description

Built by a former PM / analyst to give anyone with a little bit of Python knowledge the ability to scale their investment process. Access, manipulate, and visualize Canalyst models, without opening Excel. Work with full fundamental models, create and calculate scenarios, and visualize actionable investment ideas.

A hosted, collaborative JupyterHub server is available at Candas Cloud.

  • Rather than simply delivering data, Candas serves the actual model in a Python class. Like a calculator, this allows for custom scenario evaluation for one or more companies at a time.
  • Use Candas to search for KPIs by partial or full description; filter by "key driver" (model driver), sector, or category; or query against values for screener-like functionality. Search either our full model dataset or our guidance dataset for companies that provide guidance.
  • Discover the KPIs with the greatest impact on stock price, and evaluate those KPIs based on changing P&L scenarios.
  • Visualize P&L statements in node trees with common size % and values attached. Use the built-in charting tools to efficiently make comparisons.

In short, a data science library using Canalyst's API, developed for securities analysis using Python.

  • Search KPI
  • Company data Dataframes (one company or many)
  • Charts
  • Model update (scenario analysis)
  • Visualize formula builds

Installation

Installation instructions can be found on our PyPI page.
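If you just need the package, a typical install from PyPI looks like the line below (the package name is taken from the PyPI listing; pin a version if you need reproducibility):

pip install canalyst-candas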

Usage

Search Guidance:

Candas is built to facilitate easy discovery of guidance in our Modelverse. You can search guidance for key items, either filtered by a ticker / ticker list or just across the entire Modelverse.

Guidance Example:

canalyst_search.search_guidance_time_series(
    ticker="",                            # any ticker or list of tickers
    sector="Consumer",                    # path in our nomenclature is a hierarchy of sectors
    file_name="",                         # file name is a proxy for company name
    time_series_name="",                  # our range name
    time_series_description="china",      # human-readable row header
    most_recent=True)                     # most recent item or all items

Search KPI:

Candas is also built to facilitate easy discovery of KPI names in our Modelverse.

KPI Search Example:

canalyst_search.search_time_series(
    ticker="",
    sector="Thrifts",
    category="",
    unit_type="percentage",
    mo_only=True,
    period_duration_type='fiscal_quarter',
    time_series_name='',
    time_series_description='total revenue growth',  # guessing on the description
    query='value > 5')

ModelSet:

The core objects in Candas are Models. Models can be arranged in a set by instantiating a ModelSet. Instantiate a config object to handle authentication.

model_set = cd.ModelSet(ticker_list=ticker_list, config=config)  # ticker_list is a list of tickers
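The snippet above assumes an import, an authentication config, and a ticker list along the lines of the sketch below. The Config construction is an assumption here (check the Candas documentation for the exact fields your version expects), and the tickers are illustrative.

import canalyst_candas as cd

# Assumed authentication setup; the exact Config constructor and fields depend on your Candas version.
# config = cd.Config(...)

ticker_list = ["AZUL US", "MESA US"]  # example tickers, also used in the charting example below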

With a ModelSet, the model_frame() method returns pandas DataFrames. The parameters for model_frame():

  • time_series_name: send in a partial string as the time series name; model_frame will regex-search for it
  • pivot: allows for Excel-model-style wide data (good for comp screens)
  • mrq: True / False; filters to ONLY the most recent quarter
  • period_duration_type: fiscal_quarter or fiscal_year, or blank for both
  • is_historical: True filters to only historical, False to only forecasts, or blank for both
  • n_periods: defaults to 12, but most of our models go back to 2013
  • mrq_notation: applies to pivot; filters to historical data and applies MRQ-n notation to the columns (a way to handle off-fiscal reporters in comp screens)

Example:

model_set.model_frame(time_series_name="MO_RIS_REV",
                      is_driver="",
                      pivot=False,
                      mrq=False,
                      period_duration_type='fiscal_quarter',  # or fiscal_year
                      is_historical="",
                      n_periods=12,
                      mrq_notation=False)

Charting:

Candas has a Canalyst standard charting library which allows for easy visualizations.

Chart Example:
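The chart below assumes df is a DataFrame returned by model_frame() with ticker, period_name, and value columns. A hypothetical way to build it is sketched here; the time series name is taken from the chart title and may differ from the actual range name in the model:

df = model_set.model_frame(time_series_name="MO_MA_Fuel",  # assumed time series name
                           period_duration_type="fiscal_quarter",
                           is_historical=True,
                           n_periods=12)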

df_plot = (df[df["ticker"].isin(["AZUL US", "MESA US"])]
           [["ticker", "period_name", "value"]]
           .pivot_table(values="value", index=["period_name"], columns=["ticker"])
           .reset_index())
p = cd.Chart(df_plot["period_name"],
             df_plot[["AZUL US", "MESA US"]],
             ["AZUL US", "MESA US"],
             [["Periods", "Actual"]],
             title="MO_MA_Fuel")
p.show()

Scenario Analysis:

Candas can arrange a forecast, send it to our scenario engine via the fit() function, and get back the changed outputs versus the default.

Example:

return_series = "MO_RIS_EPS_WAD_Adj"
list_output = []
for ts in time_series_names:  # time_series_names: the list of time series names to shock
    df_params = model_set.forecast_frame(ts,
                                         n_periods=-1,
                                         function_name='multiply',
                                         function_value=1.1)
    dicts_output = model_set.fit(df_params, return_series)
    for key in dicts_output.keys():
        list_output.append(dicts_output[key].head(1))
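The per-series outputs collected above can then be combined into a single DataFrame for inspection. This is a minimal sketch using pandas; the exact column layout depends on what fit() returns:

import pandas as pd

df_scenarios = pd.concat(list_output)  # one row per shocked time series
df_scenarios.head()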

ModelMap:

Candas can show a node tree at any level of the P&L.

Example:

model_set.create_model_map(ticker=ticker,
                           time_series_name="MO_RIS_REV",
                           col_for_labels="time_series_description").show()  # launches in a separate browser window

ModelMap and Scenario Engine Together: node chart for Fuel Margin

KPI Importance / Scenario Engine:

Use the same node tree to extract key drivers, then use our scenario engine to rank-order a 1% change in each KPI driver by the resulting change in revenue.

Example:

import pandas as pd  # used below for merging and concatenation

# use the same node tree to extract key drivers (red nodes in the map)
df = model_set.models[ticker].key_driver_map("MO_RIS_REV")
return_series = 'MO_RIS_REV'
driver_list_df = []
for i, row in df.iterrows():
    time_series_name = row['time_series_name']
    print(f"scenario: move {time_series_name} 1% and get resultant change in {return_series}")

    # create a param dataframe for each time series name in our list
    df_1_param = model_set.forecast_frame(time_series_name,
                                          n_periods=-1,
                                          function_name='multiply',
                                          function_value=1.01)

    # our fit function will return a link to scenario engine JSON for audit
    d_output = model_set.fit(df_1_param, return_series)

    df_output = model_set.filter_summary(d_output, period_type='Q')

    # merge the scenario output with the params so we know which driver was moved
    df_merge = pd.merge(df_output, df_1_param, how='inner',
                        left_on=['ticker', 'period_name'],
                        right_on=['ticker', 'period_name'])

    driver_list_df.append(df_merge)  # append to a list for concatenating at the end

df = pd.concat(driver_list_df).sort_values('diff', ascending=False)[['ticker', 'time_series_name_y', 'diff']]
df = df.rename(columns={'time_series_name_y': 'time_series_name'})
df['diff'] = df['diff'] - 1
df = df.sort_values('diff')
df.plot(x='time_series_name', y='diff', kind='barh',
        title=ticker + " Key Drivers Revenue Sensitivity")

KPI Rank

Support

[email protected]

Contributing

The project is currently open to contributors only through discussion with the maintainer.

Authors and acknowledgment

[email protected]

License

Apache License 2.0

Project status

Ongoing
