peptides.py

Physicochemical properties and indices for amino-acid sequences.


🗺️ Overview

peptides.py is a pure-Python package to compute common descriptors for protein sequences. It is a port of Peptides, the R package written by Daniel Osorio for the same purpose. This library has no external dependencies and is available for all modern Python versions (3.6+).

🔧 Installing

Install the peptides package directly from PyPI, which hosts universal wheels that can be installed with pip:

$ pip install peptides
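
Some descriptors use vectorized numpy code when numpy is available (see the v0.2.0 changelog below), so you may optionally install it alongside:

$ pip install peptides numpy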

💡 Example

Start by creating a Peptide object from a protein sequence:

>>> import peptides
>>> peptide = peptides.Peptide("MLKKRFLGALAVATLLTLSFGTPVMAQSGSAVFTNEGVTPFAISYPGGGT")

Then use the appropriate methods to compute the descriptors you want:

>>> peptide.aliphatic_index()
89.8...
>>> peptide.boman()
-0.2097...
>>> peptide.charge(pH=7.4)
1.99199...
>>> peptide.isoelectric_point()
10.2436...

Methods that return more than one scalar value (for instance, Peptide.blosum_indices) will return a dedicated named tuple:

>>> peptide.ms_whim_scores()
MSWHIMScores(mswhim1=-0.436399..., mswhim2=0.4916..., mswhim3=-0.49200...)
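
Because these are plain named tuples, the individual descriptors can also be read by attribute or unpacked like any other tuple (values elided as above):

>>> scores = peptide.ms_whim_scores()
>>> scores.mswhim1
-0.436399...
>>> mswhim1, mswhim2, mswhim3 = scores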

Use the Peptide.descriptors method to get a dictionary with every available descriptor. This makes it very easy to create a pandas.DataFrame with descriptors for several protein sequences:

>>> import pandas
>>> seqs = ["SDKEVDEVDAALSDLEITLE", "ARQQNLFINFCLILIFLLLI", "EGVNDNECEGFFSAR"]
>>> df = pandas.DataFrame([ peptides.Peptide(s).descriptors() for s in seqs ])
>>> df
    BLOSUM1   BLOSUM2  BLOSUM3   BLOSUM4  ...        Z2        Z3        Z4        Z5
0  0.367000 -0.436000   -0.239  0.014500  ... -0.711000 -0.104500 -1.486500  0.429500
1 -0.697500 -0.372500   -0.493  0.157000  ... -0.307500 -0.627500 -0.450500  0.362000
2  0.479333 -0.001333    0.138  0.228667  ... -0.299333  0.465333 -0.976667  0.023333

[3 rows x 66 columns]
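
To keep track of which descriptors belong to which sequence, the sequences themselves can be used as the DataFrame index (plain pandas, nothing specific to peptides):

>>> df.index = seqs
>>> df.loc["EGVNDNECEGFFSAR", "BLOSUM1"]
0.479333...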

💭 Feedback

⚠️ Issue Tracker

Found a bug? Have an enhancement request? Head over to the GitHub issue tracker if you need to report or ask something. If you are filing a bug report, please include as much information as you can about the issue, and try to recreate the same bug in a simple, easily reproducible situation.

🏗️ Contributing

Contributions are more than welcome! See CONTRIBUTING.md for more details.

⚖️ License

This library is provided under the GNU General Public License v3.0. The original R Peptides package was written by Daniel Osorio, Paola Rondón-Villarreal and Rodrigo Torres, and is licensed under the terms of the GPLv2.

This project is in no way affiliated, sponsored, or otherwise endorsed by the original Peptides authors. It was developed by Martin Larralde during his PhD project at the European Molecular Biology Laboratory in the Zeller team.


Comments
  • Per-residue data

    It seems that the API can only output single statistics for the entire peptide chain, but I'm interested in statistics for each residue individually. I'm wondering if it might be possible to output an array/list from some of these functions instead of always averaging them as is done now.
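
    A per-residue profile is now possible with Peptide.profile, added in v0.3.0 (see the changelog below). A minimal sketch, assuming the method accepts a residue-to-value table such as the bundled Eisenberg hydrophobicity scale and returns one value per position:

    >>> import peptides
    >>> p = peptides.Peptide("MLKKRFLGALAVATLLTLSFGTPVMAQ")
    >>> # assumed call: one value per residue, looked up from the given data table
    >>> values = p.profile(peptides.tables.HYDROPHOBICITY["Eisenberg"])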

    enhancement 
    opened by multimeric 1
  • Hydrophobic moment is inconsistent with R version

    Computed hydrophobic moment is not the same as the one computed by R. More specifically, it seems that peptides.py always outputs 0 for the hydrophobic moment when peptide length is shorter than the set window. The returned value matches the value from R when peptide length is equal to or greater than the set window length.

    Example in python:

    >>> import peptides
    >>> peptides.Peptide("MLK").hydrophobic_moment(window=5, angle=100)
    0.0
    >>> peptides.Peptide("AACQ").hydrophobic_moment(window=5, angle=100)
    0.0
    >>> peptides.Peptide("FGGIQ").hydrophobic_moment(window=5, angle=100)
    0.31847187610377536
    

    Example in R:

    > library(Peptides)
    > hmoment(seq="MLK", window=5, angle=100)
    [1] 0.8099386
    > hmoment(seq="AACQ", window=5, angle=100)
    [1] 0.3152961
    > hmoment(seq="FGGIQ", window=5, angle=100)
    [1] 0.3184719
    

    I think that it can be easily fixed by internally setting the window length to the length of the peptide if the latter is shorter. What I propose:

    --- a/peptides/__init__.py
    +++ b/peptides/__init__.py
    @@ -657,6 +657,7 @@ class Peptide(typing.Sequence[str]):
                   :doi:`10.1073/pnas.81.1.140`. :pmid:`6582470`.
    
             """
    +        window = min(window, len(self))
             scale = tables.HYDROPHOBICITY["Eisenberg"]
             lut = [scale.get(aa, 0.0) for aa in self._CODE1]
             angles = [(angle * i) % 360 for i in range(window)]
    
    bug 
    opened by eotovic 1
  • RuntimeWarning in the auto_correlation() function

    Hi, thank you for creating peptides.py.

    Some hydrophobicity tables, combined with certain proteins, cause a RuntimeWarning in the auto_correlation() function:

    import peptides
    
    for hydro in peptides.tables.HYDROPHOBICITY.keys():
        print(hydro)
        table = peptides.tables.HYDROPHOBICITY[hydro]
        peptides.Peptide('MANTQNISIWWWAR').auto_correlation(table)
    

    The warning (raised when s2 == 0):

    RuntimeWarning: invalid value encountered in double_scalars
      return s1 / s2
    

    The tables concerned are octanolScale_pH2, interfaceScale_pH2, and oiScale_pH2. Some other proteins causing the same warning: ['MSYGGSCAGFGGGFALLIVLFILLIIIGCSCWGGGGYGY', 'MFILLIIIGASCFGGGGGCGYGGYGGYAGGYGGYCC', 'MSFGGSCAGFGGGFALLIVLFILLIIIGCSCWGGGGGF']
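
    Until this is handled inside the library, a possible user-side guard is to check the returned value for NaN, which is what the 0/0 division produces under numpy (a hypothetical workaround, not part of the peptides API):

    import math
    import peptides

    table = peptides.tables.HYDROPHOBICITY["octanolScale_pH2"]
    value = peptides.Peptide("MANTQNISIWWWAR").auto_correlation(table)
    if math.isnan(value):
        value = 0.0  # assumption: treat an undefined auto-correlation as zero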

    opened by jhahnfeld 0
Releases(v0.3.1)
  • v0.3.1(Sep 1, 2022)

  • v0.3.0(Sep 1, 2022)

    Added

    • Peptide.linker_preference_profile to build a profile like the one used in the DomCut method from Suyama & Ohara (2002).
    • Peptide.profile to build a generic per-residue profile from a data table (#3).
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Oct 25, 2021)

    Added

    • Peptide.counts method to get the number of occurrences of each amino acid in the peptide.
    • Peptide.frequencies to get the frequencies of each amino acid in the peptide.
    • Peptide.pcp_descriptors to compute the PCP descriptors from Mathura & Braun (2001).
    • Peptide.sneath_vectors to compute the descriptors from Sneath (1966).
    • Hydrophilicity descriptors from Barley (2018).
    • Peptide.structural_class to predict the structural class of a protein using one of three reference datasets and one of four distance metrics.

    Changed

    • Peptide.aliphatic_index now supports the ambiguous Leu/Ile residue (code J).
    • Swap order of Peptide.hydrophobic_moment arguments for consistency with profile methods.
    • Some Peptide functions now support vectorized code using numpy if available.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Oct 21, 2021)
