Accurate identification of bacteriophages from metagenomic data using Transformer


PhaMer

PhaMer is a Python library for identifying bacteriophages from metagenomic data. PhaMer is based on a Transformer model and relies on a protein-based vocabulary to convert DNA sequences into sentences.
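
The snippet below is a minimal, hypothetical sketch of this idea, not PhaMer's actual preprocessing code (the vocabulary and function names are invented for illustration): the proteins predicted on a contig are mapped to token IDs from a protein-cluster vocabulary, so the contig becomes a "sentence" the Transformer can read.

# Illustrative sketch only -- names and vocabulary are hypothetical, not PhaMer's code.
protein2token = {"PC_00001": 1, "PC_00002": 2, "PC_00003": 3}  # hypothetical protein-cluster vocabulary

def contig_to_sentence(protein_clusters, vocab, unk_id=0):
    """Map the ordered protein clusters found on a contig to token IDs."""
    return [vocab.get(pc, unk_id) for pc in protein_clusters]

print(contig_to_sentence(["PC_00002", "PC_99999", "PC_00001"], protein2token))
# -> [2, 0, 1]  (unknown clusters fall back to the unk token)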

Overview

The main function of PhaMer is to identify phage-like contigs from metagenomic data. The input to the program should be FASTA files, and the output is a CSV file containing the predictions. Since PhaMer is a deep learning model, we recommend running it on a GPU if one is available to save time.

If you have any trouble installing or using PhaMer, please let us know by opening an issue on GitHub or emailing us ([email protected]).

Required Dependencies

If you want to use a GPU to accelerate the program, you will need:

  • CUDA

  • the GPU build of PyTorch

  • For the CPU version of PyTorch: conda install pytorch torchvision torchaudio cpuonly -c pytorch

  • For the GPU version of PyTorch: search for the PyTorch install command that matches the CUDA version on your machine. You can verify which build is active with the check after this list.
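
If you are unsure which build ended up in your environment, the short check below (a generic PyTorch snippet, not part of PhaMer) tells you whether CUDA is visible:

# Generic PyTorch check -- not part of PhaMer itself.
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; PhaMer will fall back to the CPU build of PyTorch.")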

An easier way to install

Note: we suggest installing all the packages with conda (both Miniconda and Anaconda are fine).

After cloning this repository, you can use conda to create the environment from PhaMer.yaml. This will install all the packages you need in GPU mode (make sure you have installed CUDA on your system to use the GPU version; otherwise, it will run in CPU mode). The command is: conda env create -f PhaMer.yaml -n phamer

Prepare the database and environment

Due to GitHub's file size limits, the database is compressed. Before using PhaMer, you need to unpack it using the following commands.

  1. When you use PhaMer for the first time:
cd PhaMer/
conda env create -f PhaMer.yaml -n phamer
conda activate phamer
cd database/
bzip2 -d database.fa.bz2
git lfs install
rm transformer.pth
git checkout .
cd ..

Note: Because the model parameter file is larger than 100 MB, please make sure you have installed git-lfs so that it can be downloaded from GitHub.
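
If you want to double-check that the download succeeded, a quick sanity check (a generic sketch, not part of PhaMer) is to look at the file size: a Git LFS pointer stub is only a few hundred bytes, while the real transformer.pth is well over 100 MB.

# Generic sanity check -- not part of PhaMer itself.
import os

path = "database/transformer.pth"
size_mb = os.path.getsize(path) / 1e6
if size_mb < 1:
    print(f"{path} is only {size_mb:.3f} MB; it looks like a Git LFS pointer. "
          "Install git-lfs and re-run 'git checkout .'")
else:
    print(f"{path} looks complete ({size_mb:.0f} MB).")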

  2. If the example runs without any bugs, you only need to activate your 'phamer' environment before using PhaMer.
conda activate phamer

Usage

python preprocessing.py [--contigs INPUT_FA] [--len MINIMUM_LEN]
python PhaMer.py [--out OUTPUT_CSV] [--reject THRESHOLD]

Options

  --contigs INPUT_FA
                        input fasta file
  --len MINIMUM_LEN
                        predict only for sequences >= len bp (default 3000)
  --out OUTPUT_CSV
                        the output CSV file (predictions)
  --reject THRESHOLD
                        threshold to reject prophages; the higher the value, the more prophages will be rejected (default 0.3)

Example

Prediction on the example file:

python preprocessing.py --contigs test_contigs.fa
python PhaMer.py --out example_prediction.csv

The predictions will be written to example_prediction.csv. The CSV file has three columns: contig names, prediction, and prediction score.
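
The output is a plain CSV, so it can be inspected with standard tools; a minimal sketch, assuming the three-column layout described above (exact header names may differ in your file):

# Generic sketch for inspecting the output -- exact header names may differ.
import csv

with open("example_prediction.csv") as handle:
    for row in csv.reader(handle):
        print(row)  # each row: contig name, prediction, prediction score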

References

The paper has been submitted to ISMB 2022.

The arXiv version can be found via: Accurate identification of bacteriophages from metagenomic data using Transformer

Contact

If you have any questions, please email us: [email protected]

Comments
  • issues collections from schackartk (solved)

    Hi! Thank you for publishing your code publicly.

    I am a researcher who works with many tools that identify phage in metagenomes. However, I like to be confident in the implementation of the concepts. I noticed that your repository does not have any formal testing. Without tests, I am always skeptical about implementing a tool in my own work because I cannot be sure it is working as described.

    Would your team be interested in adding tests to the code (e.g. using pytest)? If I, or another developer, were to create a pull request that implemented testing, would your team consider accepting such a request?

    Also, it is a small thing, but I noticed that your code is not formatted in any community-accepted way. Would you consider accepting a pull request that has passed the code through a linter such as yapf or black? I usually add linting as part of my test suites.

    opened by schackartk 12
  • Rename preprocessing.py?

    Hi Kenneth,

    preprocessing.py is a pretty generic name; maybe rename the script to PhaMer_preprocess.py to avoid potential future conflicts with other software?

    opened by sjaenick 1
  • bioconda recipe

    Any plans on creating a bioconda recipe for PhaMer? That would greatly help users with the install & version management of PhaMer.

    Also in regards to:

    Because the parameter is larger than 100M, please make sure you have downloaded transformer.pth correctly.

    Why not just use md5sum?

    opened by nick-youngblut 1
  • Threading/Performance updates

    Hi,

    • introduce --threads to control threading behavior
    • allow to supply external database directory, so DIAMOND database formatting isn't needed every time
    • use 'pprodigal' for faster gene prediction step
    • removed unused imports

    Please note I didn't yet add pprodigal to the conda yaml - feel free to do so if you want to include it

    opened by sjaenick 1
  • Bug: Unable to clone repository

    Hello,

    It seems that this repository is exceeding its data transfer limits. I believe you are aware of this, as you instruct users to download the transformer.pth from Google Drive.

    However, it seems that I cannot clone the repository in general. I just want to make sure this is not a problem on my end, so I will walk through what I am doing.

    Reproducible Example

    First, cloning the repository

    $ git clone git@github.com:KennthShang/PhaMer.git
    Cloning into 'PhaMer'...
    Downloading database/transformer.pth (143 MB)
    Error downloading object: database/transformer.pth (28a82c1): Smudge error: Error downloading database/transformer.pth (28a82c1ca0fb2499c0071c685dbf49f3a0d060fdc231bb04f7535e88e7fe0858): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
    
    Errors logged to /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/schackartk/projects/PhaMer/.git/lfs/logs/20220124T113548.16324098.log
    Use `git lfs logs last` to view the log.
    error: external filter git-lfs smudge -- %f failed 2
    error: external filter git-lfs smudge -- %f failed
    fatal: database/transformer.pth: smudge filter lfs failed
    warning: Clone succeeded, but checkout failed.
    You can inspect what was checked out with 'git status'
    and retry the checkout with 'git checkout -f HEAD'
    

    Checking git status as suggested indicates that several files were not checked out

    $ cd PhaMer/
    $ git status
    # On branch main
    # Changes to be committed:
    #   (use "git reset HEAD <file>..." to unstage)
    #
    #	deleted:    .gitattributes
    #	deleted:    LICENSE.txt
    #	deleted:    PhaMer.py
    #	deleted:    PhaMer.yaml
    #	deleted:    README.md
    #	deleted:    database/.DS_Store
    #	deleted:    database/contigs.csv
    #	deleted:    database/database.fa.bz2
    #	deleted:    database/pc2wordsid.dict
    #	deleted:    database/pcs.csv
    #	deleted:    database/profiles.csv
    #	deleted:    database/proteins.csv
    #	deleted:    database/transformer.pth
    #	deleted:    logo.jpg
    #	deleted:    model.py
    #	deleted:    preprocessing.py
    #	deleted:    test_contigs.fa
    #
    # Untracked files:
    #   (use "git add <file>..." to include in what will be committed)
    #
    #	.gitattributes
    #	LICENSE.txt
    #	PhaMer.py
    #	PhaMer.yaml
    #	README.md
    #	database/
    

    To confirm that several files are missing, such as preprocessing.py.

    $ ls
    database  LICENSE.txt  PhaMer.py  PhaMer.yaml  README.md
    

    Continuing with installation instructions anyway in case they resolve these issues.

    $ conda env create -f PhaMer.yaml -n phamer
    Collecting package metadata (repodata.json): done
    Solving environment: done
    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: \ By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html
    
    done
    Installing pip dependencies: / Ran pip subprocess with arguments:
    ['/home/u29/schackartk/.conda/envs/phamer/bin/python', '-m', 'pip', 'install', '-U', '-r', '/xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/schackartk/projects/PhaMer/condaenv.1nhq2jy1.requirements.txt']
    Pip subprocess output:
    Collecting joblib==1.1.0
      Using cached joblib-1.1.0-py2.py3-none-any.whl (306 kB)
    Collecting scikit-learn==1.0.1
      Using cached scikit_learn-1.0.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (25.9 MB)
    Collecting sklearn==0.0
      Using cached sklearn-0.0-py2.py3-none-any.whl
    Collecting threadpoolctl==3.0.0
      Using cached threadpoolctl-3.0.0-py3-none-any.whl (14 kB)
    Requirement already satisfied: scipy>=1.1.0 in /home/u29/schackartk/.conda/envs/phamer/lib/python3.8/site-packages (from scikit-learn==1.0.1->-r /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/schackartk/projects/PhaMer/condaenv.1nhq2jy1.requirements.txt (line 2)) (1.7.1)
    Requirement already satisfied: numpy>=1.14.6 in /home/u29/schackartk/.conda/envs/phamer/lib/python3.8/site-packages (from scikit-learn==1.0.1->-r /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/schackartk/projects/PhaMer/condaenv.1nhq2jy1.requirements.txt (line 2)) (1.21.2)
    Installing collected packages: threadpoolctl, joblib, scikit-learn, sklearn
    Successfully installed joblib-1.1.0 scikit-learn-1.0.1 sklearn-0.0 threadpoolctl-3.0.0
    
    done
    #
    # To activate this environment, use
    #
    #     $ conda activate phamer
    #
    # To deactivate an active environment, use
    #
    #     $ conda deactivate
    

    Activating the conda environment and attempting to get transformer.pth

    $ conda activate phamer
    $ cd database
    $ bzip2 -d database.fa.bz2
    $ git lfs install
    Updated git hooks.
    Git LFS initialized.
    $ rm transformer.pth
    rm: cannot remove ‘transformer.pth’: No such file or directory
    $ git checkout .
    error: pathspec './' did not match any file(s) known to git.
    
    $ ls
    contigs.csv  database.fa  pc2wordsid.dict  pcs.csv  profiles.csv  proteins.csv
    

    Since the file doesn't seem to exist, I followed your Google Drive link and pasted into database/ manually.

    $ ls
    contigs.csv  database.fa  pc2wordsid.dict  pcs.csv  profiles.csv  proteins.csv  transformer.pth
    

    I will try checking out again.

    $ git checkout .
    error: pathspec './' did not match any file(s) known to git.
    

    Going back up, you can see that I am still missing scripts.

    $ cd ..
    $ ls
    metaphinder_reprex  phage_finders  PhaMer  snakemake_tutorial
    

    Conclusions

    I am missing the scripts and cannot run the tool. I believe this all comes down to the repo exceeding its data transfer limits. This is probably due to the large database files being stored in the repository.

    Possible solution?

    Going forward, maybe entirely remove the large files from the repo so that you don't exceed limits. I am not sure what I can do at this moment since the limits are already exceeded.

    Also, it would be helpful if I could obtain the transformer.pth from the command line (e.g. using wget) since I, and many researchers, are working on an HPC or cloud.

    Thank you, -Ken

    opened by schackartk 1
Releases (v1.0)
Owner
Kenneth Shang