GetTss Python Package

Extracts gene TSS sites from a UCSC/Ensembl/GENCODE GTF annotation file and exports them as a BED file.
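
The core idea is simple: GTF coordinates are 1-based and inclusive, the TSS of a feature is its start coordinate on the + strand and its end coordinate on the - strand, and each TSS is written as a single-base, 0-based BED6 interval. The sketch below is a minimal illustration of that conversion, not GetTss's actual implementation; the attribute parsing is deliberately crude.

import re

def tss_bed_record(chrom, start, end, strand, name):
    # GTF is 1-based inclusive; the TSS is the start on '+' and the end on '-'.
    pos = start if strand == "+" else end
    # BED is 0-based, half-open: the TSS becomes a single-base interval.
    return (chrom, pos - 1, pos, name, ".", strand)

def gtf_to_tss_bed(gtf_path, bed_path, feature="gene"):
    with open(gtf_path) as gtf, open(bed_path, "w") as bed:
        for line in gtf:
            if line.startswith("#"):
                continue
            f = line.rstrip("\n").split("\t")
            if f[2] != feature:
                continue
            # Crude gene_id extraction; real GTF attribute parsing is richer.
            m = re.search(r'gene_id "([^"]+)"', f[8])
            name = m.group(1) if m else "."
            rec = tss_bed_record(f[0], int(f[3]), int(f[4]), f[6], name)
            bed.write("\t".join(map(str, rec)) + "\n")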

Install

$ pip install GetTss

Usage

Help information:

$ GetTss -h
usage: GetTss --database ucsc --gtffile hg19.ncbiRefSeq.gtf --tssfile testTSS.bed

Get gene TSS site and export bed format from GTF annotation file.

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit
  -d {ucsc,ensembl,gencode}, --database {ucsc,ensembl,gencode}
                        which annotation database you choose. (default="ensembl")
  -g GTFFILE, --gtffile GTFFILE
                        input your GTF file. (ucsc/ensembl/gencode)
  -t TSSFILE, --tssfile TSSFILE
                        output your TSS file. (test-TSS.bed)

Thank you for your support. If you have any questions or suggestions, please contact me: [email protected].

For a UCSC GTF file:

$ GetTss -d ucsc -g hg19.ncbiRefSeq.gtf -t ucsc-TSS.bed
Your job is starting, please wait!
You GTF file have: 104178 transcripts.
 
Your task has down!

$ head -n 3 ucsc-TSS.bed
chrMT   16023   16024   TRNP    .       -
chrMT   15887   15888   TRNT    .       +
chrMT   14746   14747   CYTB    .       +

For a GENCODE/Ensembl GTF file:

$ GetTss -d gencode -g gencode.v19.annotation.gtf -t test-TSS.bed
Your job is starting, please wait!
You GTF file have: 57820 genes.

Your task has down!

$ head -n 3 test-TSS.bed
chr1    11868   11869   ENSG00000223972.4       .       +
chr1    29806   29807   ENSG00000227232.4       .       -
chr1    29553   29554   ENSG00000243485.2       .       +
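
Because every interval in the output is a single base, it is straightforward to expand each TSS into a promoter window for downstream tools. A small sketch follows; the ±2 kb window, the file names, and the absence of chromosome-length clipping are assumptions for illustration, not part of GetTss.

# Expand each 1-bp TSS interval from test-TSS.bed into a +/-2 kb promoter window.
FLANK = 2000

with open("test-TSS.bed") as tss, open("test-promoters.bed", "w") as out:
    for line in tss:
        chrom, start, end, name, score, strand = line.rstrip("\n").split("\t")
        center = int(start)                 # 0-based TSS coordinate
        win_start = max(0, center - FLANK)  # clamp at the chromosome start
        win_end = center + FLANK + 1        # not clipped at chromosome ends here
        out.write(f"{chrom}\t{win_start}\t{win_end}\t{name}\t{score}\t{strand}\n")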

Plot peak density around the TSS

Compute the matrix with deepTools computeMatrix (each interval in the TSS BED file is a single base, so --referencePoint center is the TSS itself):

$ computeMatrix reference-point -S normal.bw treat.bw \
                -R myTSS.bed \
                --referencePoint center \
                -a 3000 -b 3000 -p 25 \
                -out matrix.tab.gz

Plot the profile with plotProfile:

$ plotProfile -m matrix.tab.gz \
              -out profile.pdf \
              --perGroup \
              --plotTitle 'test profile'
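
For intuition, the matrix step amounts to averaging the bigWig signal in bins across a fixed window around every TSS. The following is a conceptual Python sketch only; it assumes the pyBigWig package and the file names used above, and deepTools itself should be preferred for real analyses.

import pyBigWig

def tss_signal_matrix(bigwig_path, tss_bed_path, flank=3000, nbins=100):
    bw = pyBigWig.open(bigwig_path)
    sizes = bw.chroms()                              # chromosome -> length
    matrix = []
    with open(tss_bed_path) as bed:
        for line in bed:
            chrom, start, _end, _name, _score, strand = line.rstrip("\n").split("\t")[:6]
            if chrom not in sizes:
                continue
            tss = int(start)
            left = max(0, tss - flank)
            right = min(sizes[chrom], tss + flank)
            # Mean signal in nbins equal-width bins across the window.
            vals = bw.stats(chrom, left, right, type="mean", nBins=nbins)
            row = [v if v is not None else 0.0 for v in vals]
            if strand == "-":                        # orient minus-strand genes 5'->3'
                row.reverse()
            matrix.append(row)
    bw.close()
    return matrix

# e.g. rows = tss_signal_matrix("treat.bw", "test-TSS.bed")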