Subscrape - A Python scraper for substrate chains

Overview

subscrape

A Python scraper for substrate chains that uses Subscan.

Usage

  • copy config/sample_scrape_config.json to config/scrape_config.json and configure it to your needs.
  • make sure a data/parachains folder exists
  • run the scraper
  • the corresponding files will be created in data/

If a file already exists in data/, that operation will be skipped in subsequent runs.

Configuration

To query extrinsics from Substrate chains, only the module and call are needed.

To query transactions from Moonbeam chains, the contract address and the hex-formatted method ID are needed. The method ID can be found by opening a transaction on Moonscan, scrolling to its input data, and copying the method ID. Example: https://moonriver.moonscan.io/tx/0x35c7d7bdd33c77c756e7a9b41b587a6b6193b5a94956d2c8e4ea77af1df2b9c3
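
For orientation, here is a hedged sketch of what a scrape_config.json could express, written as the equivalent Python dict (the same notation used in the examples further down this page). The kusama entry follows the module/call rule above; the moonriver layout, mapping a contract address to a list of hex method IDs under the "transactions" operation, is an assumption for illustration and not taken from the sample config.

    config = {
        "kusama": {
            "extrinsics": {
                # module -> list of calls to scrape
                "crowdloan": ["create"]
            }
        },
        "moonriver": {
            "transactions": {
                # assumed layout: contract address -> hex-formatted method IDs
                "0x0000000000000000000000000000000000000000": ["0x12345678"]
            }
        }
    }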

Architecture

An overview is given in this Twitter thread: https://twitter.com/alice_und_bob/status/1493714489014956037

General

We use the following components in the project:

SubscanWrapper

There is a class SubscanWrapper that encapsulates the logic around calling Subscan (API documentation: https://docs.api.subscan.io/). If you have a Subscan API key, you can put it in the main folder in a file called "subscan-key" and it will be applied to your calls.
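
A minimal sketch of what such a wrapper can look like, assuming Subscan's POST-based endpoints and the X-API-Key header; this is an illustration, not the project's actual implementation:

    import os
    import httpx

    class SubscanWrapperSketch:
        """Illustrative stand-in for the real SubscanWrapper."""

        def __init__(self, chain, key_path="subscan-key"):
            self.endpoint = f"https://{chain}.api.subscan.io"
            self.api_key = None
            if os.path.exists(key_path):
                # the optional API key is read from the "subscan-key" file
                with open(key_path) as f:
                    self.api_key = f.read().strip()

        def query(self, path, body):
            # Subscan endpoints are POST calls; the key goes into the X-API-Key header
            headers = {"Content-Type": "application/json"}
            if self.api_key:
                headers["X-API-Key"] = self.api_key
            response = httpx.post(self.endpoint + path, json=body, headers=headers)
            response.raise_for_status()
            return response.json()

    # e.g. SubscanWrapperSketch("kusama").query("/api/scan/extrinsics",
    #     {"module": "crowdloan", "call": "create", "row": 25, "page": 0})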

ParachainScraper

This is a scraper that knows how to use the SubscanWrapper to fetch data for a parachain and serialize it to disk.

Currently it knows how to fetch addresses and extrinsics.
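
As a rough sketch of that split, continuing the illustration above (the method name, the file layout in data/parachains/, and the query call are assumptions):

    import json
    import os

    class ParachainScraperSketch:
        """Illustrative stand-in for the real ParachainScraper."""

        def __init__(self, chain, subscan):
            self.chain = chain
            self.subscan = subscan  # a SubscanWrapper-like object

        def scrape_extrinsics(self, module, call):
            path = f"data/parachains/{self.chain}_extrinsics_{module}_{call}.json"
            if os.path.exists(path):
                return  # existing files are skipped on subsequent runs
            result = self.subscan.query(
                "/api/scan/extrinsics",
                {"module": module, "call": call, "row": 25, "page": 0})
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                json.dump(result, f)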

Comments
  • Introduce SubscanV2 API

    This PR introduces a SubscanV2 API Wrapper as a configurable option. In a future version of Subscrape, this should become the default version.

    What's new?

    • You can now configure which API is used for scraping by setting the _api param within a chain's configuration.
    • SubscrapeDB is rewritten to store hydrated extrinsics and events in a dedicated SQLite file. The previous storage files are redefined as indexes (or summary elements).
    • New operations extrinsics-list and events-list were added. You can submit lists of extrinsic or event indices and receive hydrated elements.
    • events and extrinsics scraping: module can now be set to None to scrape all extrinsics or events. name can now be set to None to scrape all calls from the module.
    • The improved paging mechanism of SubscanV2 is used.
    • The 'transfers' operation was fixed.

    This changes the output format of the results. Any code that uses the new scraper might break.

    Configuring the new SubscanV2 API

    config = {
            "kusama":{
                "_api": "SubscanV2",
                "extrinsics":{
                    "crowdloan": ["create"]
                },
                "extrinsics-list":[
                    "14238250-2"
                ],
                "events":{
                    "crowdloan": ["created"]
                },
                "events-list":[
                    "14238250-39"
                ],
                "transfers": [
                    "Dp27W3mGpkT7BG9SxNsTtWoKcvJSwzF4BU8tadYirkn6Kwx"
                ]
            },
        }
    

    For now, SubscanV1 is still the default version and is used automatically if _api is not given. At some point in the future, the default may change to SubscanV2, and eventually support for SubscanV1 might be discontinued.

    What changed

    SubscanWrapper was refactored into SubscanBase, from which SubscanV1 and SubscanV2 now inherit. Most logic is in SubscanBase for now. The specific classes only override params at the moment, but in the future I suspect specific method implementations will follow (e.g. the paging changes in V2).

    A bit of the logic from ParachainScraper had to move into SubscanBase, namely the fetch functions that parameterized the call. I wanted to avoid code duplication or ugly forks in code.
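
    Roughly, the class layout described above looks like this (the class names come from this PR; everything else is an illustrative assumption):

    class SubscanBase:
        # shared fetch and paging logic lives here for now
        api_version = None

        def __init__(self, chain, api_key=None):
            self.chain = chain
            self.api_key = api_key

    class SubscanV1(SubscanBase):
        api_version = "v1"  # default when "_api" is not set in the config

    class SubscanV2(SubscanBase):
        api_version = "v2"  # selected via "_api": "SubscanV2"; uses the improved paging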

    • [x] Updated Documentation + sample config
    • [x] Updated Tests
    opened by Tomen 1
  • MOVR liquidity provisioning

    • refactor to break apart large re-usable MOVR methods.
    • nearly complete support of scraping liquidity provisioning (but no tax analysis yet)
    • export the MOVR transactions to an .xlsx spreadsheet (using pandas)
    opened by spazcoin 1
  • Add new `extrinsics-list` scrape operation. Refactor SubscrapeDB. Add Unit Tests and Github Automations

    New Feature

    This PR adds a new operation extrinsics-list to scrape extrinsics for a list of extrinsic indexes.

    config = {
        "kusama":{
            "extrinsics-list":{
                ["14238250-2"]
            }
        }
    }
    

    To showcase the usage, bin/sample_extrinsic_list.py has been added.

    Changes to SubscrapeDB

    To read and write single extrinsics to and from the DB, the write_extrinsic and read_extrinsic methods have been added.

    This highlighted a weakness of the current design in SubscrapeDB: extrinsics are assumed to be known by module/call. To work around the issue, a new meta index was introduced. SectorizedStorageManager is replaced with a sqlitedict key-value store wrapper, which itself is a wrapper on top of SQLite. This is only the first of a few optimization steps I see, but it is non-breaking so far.
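
    As a hedged illustration of that storage change (write_extrinsic and read_extrinsic come from this PR; the key scheme and file path are assumptions):

    from sqlitedict import SqliteDict

    class SubscrapeDBSketch:
        """Illustrative sqlitedict-backed key-value store."""

        def __init__(self, path="data/subscrape.sqlite"):
            self._store = SqliteDict(path, autocommit=True)

        def write_extrinsic(self, index, extrinsic):
            # index like "14238250-2" (block number - extrinsic position)
            self._store[f"extrinsic:{index}"] = extrinsic

        def read_extrinsic(self, index):
            return self._store.get(f"extrinsic:{index}")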

    Addition of Unit Tests and Github Automations

    Since this has a huge impact on the storage, I decided to start adding unit tests. I also added GitHub CI automations to automatically run the unit tests on every PR.

    opened by Tomen 1
  • export EVM/MOVR transactions to spreadsheet

    Either as a built-in functionality or as an example for the user, add script capability to export decoded MOVR/GLMR transactions into a CSV or XLS file format to be imported into Excel. Extra decoded data like exact input/output quantities and token identifiers should also be exported.

    This feature would not yet try to line up transactions to perform Ins/Outs transaction analysis. This feature would not yet try to format the CSV/XLS export data to match an existing format to be imported into other programs like Rotki.
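
    For reference, the pandas-based export mentioned in the related PR boils down to something like the following sketch (the column names and values are made up for illustration):

    import pandas as pd

    # illustrative rows; a real export would carry the decoded swap data
    decoded_txs = [
        {"tx_hash": "0xabc123", "token_in": "ROME", "qty_in": 10.0,
         "token_out": "MOVR", "qty_out": 0.5},
    ]
    # .to_excel() needs an engine such as openpyxl installed
    pd.DataFrame(decoded_txs).to_excel("movr_transactions.xlsx", index=False)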

    Estimate: 3 hours

    enhancement 
    opened by spazcoin 1
  • finish EVM/MOVR token swap decoding

    Subscrape can now decode contract interaction transactions using the contract's ABI. For transactions like token swaps where both input and output tokens are specified, subscrape can now decode those (including event logs). However, swapETHForTokens still needs to be decoded/interpreted because it doesn't specify an input quantity, likely because it uses native tokens. For this task, complete the analysis and implementation for those specific methods so that DEX token swap decoding is complete.

    Estimate: 2 hours.

    enhancement 
    opened by spazcoin 1
  • new MOVR/GLMR operation: "account_transactions"

    Description

    • support new operation "account_transactions" for retrieving all transactions for an account. Structured so that I think support for filters and skipping can be added later. (A hedged config sketch follows this list.)
    • refactor moonbeam_scraper's "fetch_transactions" so that it can be used to retrieve transactions both for accounts and for entire contract methods (by pulling the processor definition out).
    • a little PEP8 cleanup and reStructuredText docstrings for methods
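
    Following the chain/operation pattern used elsewhere on this page, the new operation might be configured roughly like this (the key layout and the placeholder address are assumptions):

    config = {
        "moonriver": {
            "account_transactions": [
                # placeholder account address for illustration
                "0x0000000000000000000000000000000000000000"
            ]
        }
    }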

    Testing

    • to make sure your "transactions" operation still works, I ran subscrape.py using your example config file and it chugged through Solarbeam transactions. Looks like about 7935 unique addresses, so hopefully that matches your previous results.

    Future

    • I'd suggest renaming your "transactions" operation to something like contract_transactions.
    • I'd love to use your filtering module on moonriver/moonbeam, but it looks like you've only integrated it with ParachainScraper so far.
    opened by spazcoin 1
  • SubscrapeDB V2 - transition to full SQLite

    Changes:

    • Introducing Extrinsic and Event classes with properties as delivered by Subscan.
    • SubscrapeDB completely overhauled to use full SQLite capabilities instead of just having key-value pairs.
    • _db_connection_string: New config param on the chain level that allows you to define a connection string for SQLAlchemy to use (see the sketch after this list).
    • _auto_hydrate: New config param that defines if scraping extrinsics and events shall automatically hydrate them.
    • removed SubscanV1 and the _api config param
    • _stop_at_known_data: New config param that defines if the scraper should stop scraping metadata when hitting a known element.
    • SubscanWrapper's fetch_extrinsic() and fetch_event() now update any already existing item in the DB. You can explicitly override this by setting the optional update_existing param to False.
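
    A rough sketch of the SQLAlchemy-backed layout these changes describe (the Extrinsic columns and the connection string are illustrative assumptions):

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Extrinsic(Base):
        __tablename__ = "extrinsics"
        id = Column(String, primary_key=True)  # e.g. "14238250-2"
        module = Column(String)
        call = Column(String)
        # further properties as delivered by Subscan would follow

    # the chain-level _db_connection_string is handed to SQLAlchemy
    engine = create_engine("sqlite:///data/subscrape.db")
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
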
    opened by Tomen 0
  • real life use cases explained

    Add explanations on how to use our scripts and example config files to perform several of the operations that we wrote subscrape to accomplish ourselves.

    contributes to #22

    opened by spazcoin 0
  • shim `setup.py` so editable install possible

    I've previously used setup.py but noticed you were using a pyproject.toml instead so I thought I'd learn about the differences. I found this SO post describing how you could use a shim setup.py to still allow editable installs of the subscrape package. (I didn't even know editable installs were possible before!) Anyways, I've added in that shim to give us maximal utility in the future. https://stackoverflow.com/questions/62983756/what-is-pyproject-toml-file-for
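
    The shim in question is tiny; next to a pyproject.toml it is typically just:

    # setup.py -- shim so `pip install -e .` works alongside pyproject.toml
    from setuptools import setup

    setup()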

    contributes to #20

    opened by spazcoin 0
  • Feature/sectorized storage

    This pull request abstracts the sectorized storage and retrieval of extrinsics into a dedicated class. This is the first step toward storing types other than extrinsics with the same logic.

    The pull request has been tested with scrape.py

    opened by Tomen 0
  • generically decode DEX swap transactions, including event log decoding

    Description:

    This PR uses @yifei_huang's scripts to decode contract input data as well as interpret a transaction log file (to figure out the result of a contract interaction, not just the inputs). See the article for an explanation: https://towardsdatascience.com/decoding-ethereum-smart-contract-data-eed513a65f76

    • retrieve the ABIs for each contract interface, instead of archiving copies of each one. This allows us to support all ABIs without needing to archive each one.
    • Get the token name and symbol, and calculate real transaction values. This required adding support for the Blockscout API as well.
    • moonbeam_scraper.py is now able to decode contract input data, and then retrieve transaction receipts for contract interactions so that it can decode the internal dex 'trace' transactions to determine the exact input and output quantities for swap transactions. Previously, we were using the internal 'Swap' transactions, but those were too difficult to match up for a multi-hop swap (i.e. ROME -> MOVR -> SOLAR) and a few implementations were confusing with multiple Swap inputs. So instead the code now only sums up the Transfer input quantities from the source account. This is done in a generic way, without requiring classes for each DEX contract implementation, which works as long as DEX contracts all follow general patterns and naming conventions.
    • decode errors for DPS contracts were due to @yifei_huang's initial convert_to_hex functionality not being able to handle more complex data structures. When working down through nested structures, it missed buried byte arrays and didn't convert them to hex. json.dumps(decoded_func_params) was throwing an exception because the bytes weren't JSON serializable. The solution was to look for lists embedded inside a tuple/struct.
    • decode errors for 0xTaylor's puzzle contract were caused intentionally through poor UTF-8 string handling. Therefore we still print a warning the first time that contract is encountered and can't be decoded, and include the full traceback and diagnostic messages instead of swallowing them.
    • add rate limiting to our block explorer API calls, to make sure we always get a valid response from the APIs. Adapt the rate limit for subscan.io depending on whether an API key is provided.
    • support moonscan.io api keys
    • use httpx for GET and POST instead of the requests library. Queries to the moonscan.io API started failing. Using Wireshark I tracked it down to the "requests" Python package, which doesn't support HTTP/2. It was submitting the moonscan.io GET request with HTTP/1.1, and moonscan.io responded with HTTP/1.1 403 Forbidden and the Cloudflare error message. Whereas if I hand-crafted the same GET request as a URL string, it would be accepted and I received JSON back with the transaction history. Wireshark showed that these handcrafted URLs were being submitted over HTTP/2. The solution was to migrate GET and POST HTTP messages to the httpx package instead of requests (with pip you must install httpx's optional HTTP/2 support). I went ahead and changed subscan.io calls to use httpx also, but with minimal testing. (See the sketch after this list.)
    • update docs to explain how moonbeam_scraper works now, how to interpret its output data, what typical warnings/errors are.
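
    The httpx change mentioned above boils down to something like this sketch (the endpoint and query parameters are illustrative, Etherscan-style guesses):

    import httpx

    # http2=True needs the optional extra: pip install "httpx[http2]"
    with httpx.Client(http2=True) as client:
        response = client.get(
            "https://api-moonriver.moonscan.io/api",  # assumed Etherscan-style endpoint
            params={"module": "account", "action": "txlist",
                    "address": "0x0000000000000000000000000000000000000000"})
        response.raise_for_status()
        transactions = response.json()
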
    opened by spazcoin 0
  • DB path should be configurable

    If many different processes are performed from the same integration, the DB might grow to a point where execution time suffers from overlapping data.

    Making the DB path configurable can mitigate this issue.

    enhancement 
    opened by Tomen 0
  • Transfers scraping might complete prematurely

    If a transfer (or even a balances.transfer event) is queried from two different addresses, the later query might hit an entry that already exists in the DB, which leads the scrape to complete prematurely.

    bug 
    opened by Tomen 0
  • extract ABI for unverified EVM contracts

    Currently, we only parse contract interactions for verified contracts since they have published ABIs to help us interpret the call data. However, there's a new tool to guess the ABI from an unverified contract. https://mobile.twitter.com/w1nt3r_eth/status/1575848008985370624

    Get to it!

    opened by spazcoin 0
  • scrape EVM/MOVR token transfers

    We can already decode contract interactions for EVM DEX token swaps. However, what about simply sending ERC-20 from one account to another? (or even XC-20 tokens) This task is to investigate what those operations look like on-chain and add a script to decode them.

    Estimate: 4 hours

    enhancement 
    opened by spazcoin 0
  • Scrape EVM/MOVR liquidity provisioning DEX transactions

    decode generic liquidity provisioning transaction types for EVM DEXes after decoding the transaction input data using the DEX contract's ABI.

    estimate: 8 hours

    enhancement 
    opened by spazcoin 0
Releases (v1.0)
  • v1.0 (Mar 21, 2022)

    This release introduces the first stable version of Subscrape. It is able to scrape extrinsics from parachains and store them locally in SubscrapeDB, to then query and transform them as needed.

    Full Changelog: https://github.com/ChaosDAO-org/subscrape/commits/v1.0

Owner
ChaosDAO