Stitch together Nanopore tiled amplicon data without polishing a reference

Overview

Stitch together Nanopore tiled amplicon data using a reference guided approach

Tiled amplicon data, like those produced from primers designed with primal scheme, are typically assembled by aligning the reads to a reference and polishing that reference into a sequence that represents the reads. This works very well for obtaining a genome with SNPs and small indels representative of the reads. However, in cases where the reads cannot be mapped well to the reference (e.g. genomes containing hypervariable regions between primers), or where large structural variants are expected, this method may fail, because polishing tools expect the reference to originate from the reads.

Lilo uses a reference only to assign reads to the amplicon they originated from and to order and orient the polished amplicons; no reference sequence is incorporated into the final assembly. Once reads are assigned to an amplicon, a read with high average base quality and roughly median length for that amplicon is selected as a reference and polished three times with medaka, using up to 300x coverage. The polished amplicons have their primers removed with porechop (using this fork: https://github.com/sclamons/Porechop-1) and are then assembled with scaffold_builder. The sketch below illustrates the read-selection step.
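
As an illustration of that read-selection step, the sketch below picks a polishing reference from one amplicon's read bin: keep reads of roughly median length, then take the one with the highest mean base quality. This is a minimal sketch of the idea only, not Lilo's actual code; the function name and the exact length window are assumptions.

import statistics

def pick_reference_read(reads, length_window=0.05):
    # Illustrative only: choose a polishing reference for one amplicon.
    # reads: list of (sequence, per-base quality scores) tuples.
    median_len = statistics.median(len(seq) for seq, _ in reads)
    lo, hi = median_len * (1 - length_window), median_len * (1 + length_window)
    # Keep reads of roughly median length; fall back to all reads if none qualify.
    pool = [r for r in reads if lo <= len(r[0]) <= hi] or reads
    # Of those, pick the read with the highest average base quality.
    return max(pool, key=lambda r: statistics.fmean(r[1]))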

Lilo has been tested on SARS-CoV-2 with ARTIC V3 primers. It has also been tested on 7 kb and 4 kb amplicons with ~100-1000 bp overlaps for ASFV, PRRSV-1, and PRRSV-2; the schemes for these will be made available in the near future.

Requirements not covered by conda

Install Conda :)
Install this fork of porechop and make sure it is in your PATH: https://github.com/sclamons/Porechop-1
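
For example, the fork can plausibly be installed like upstream Porechop (the setup.py route below is an assumption; check the fork's README):

git clone https://github.com/sclamons/Porechop-1
cd Porechop-1
python3 setup.py install
porechop --version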

Installation

git clone https://github.com/amandawarr/Lilo  
cd Lilo  
conda env create --file LILO.yaml  
conda env create --file scaffold_builder.yaml

Usage

Lilo assumes your reads are in a folder called raw/ and have the suffix .fastq.gz. Multiple samples can be processed at the same time.
Lilo requires a config file detailing the location of a reference, a primer scheme (in the form of a primal scheme style bed file), and a primers.csv file (described below).

conda activate LILO
snakemake -k -s /path/to/LILO --configfile /path/to/config.file --cores N

Running with -k is recommended so that one sample with insufficient coverage will not stop the other jobs from completing.

Input specifications

  • config.file: an example config file is provided in the repository.
  • Primer scheme: a BED file of primer alignment locations, as output by primal scheme, with alt primers removed. Columns: reference name, start, end, primer name, pool (pool names must end with 1 or 2).
  • primers.csv: comma delimited, includes alt primers, with a header line. Columns: amplicon_name, F_primer_name, F_primer_sequence, R_primer_name, R_primer_sequence. If any of the primers contain many degenerate bases, it is recommended to expand them; the script expand.py will expand the described CSV into a longer CSV with the IUPAC codes expanded (see the sketch after this list).
  • reference.fasta: the same reference used to make the scheme file.
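
To illustrate what expansion means: each IUPAC degenerate base stands for a set of concrete bases, and expanding a primer enumerates every concrete sequence it matches. Below is a minimal sketch of that idea, not Lilo's expand.py; the function name and the example primers.csv row are hypothetical.

from itertools import product

# IUPAC degenerate base -> the set of concrete bases it represents.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_primer(primer):
    # Enumerate every concrete sequence a degenerate primer matches.
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

# A primers.csv row might look like (hypothetical names and sequences):
#   amplicon_name,F_primer_name,F_primer_sequence,R_primer_name,R_primer_sequence
#   amplicon1,amp1_LEFT,ACRYT,amp1_RIGHT,GGTCA
print(expand_primer("ACRYT"))  # ['ACACT', 'ACATT', 'ACGCT', 'ACGTT']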

Output

Lilo uses the file names from raw/ to name the output. For a file named "sample.fastq.gz", the final assembly will be named "sample_Scaffold.fasta", and files produced during the pipeline will be in a folder called "sample". The output will contain amplicons that had at least 40x full-length coverage. Missing amplicons will be represented by Ns. Any ambiguity at overlaps will be indicated with IUPAC codes, as sketched below.
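
As a sketch of how overlap ambiguity can be encoded (an illustration of IUPAC coding in general, not scaffold_builder's implementation): where two polished amplicons disagree at an overlapping position, the pair of observed bases maps to one degenerate code.

# Set of disagreeing bases -> IUPAC ambiguity code.
AMBIGUITY = {
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S",
    frozenset("AT"): "W", frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("CGT"): "B", frozenset("AGT"): "D", frozenset("ACT"): "H",
    frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def merge_overlap(a, b):
    # Merge two equal-length overlap sequences, marking disagreements.
    return "".join(x if x == y else AMBIGUITY[frozenset(x + y)]
                   for x, y in zip(a.upper(), b.upper()))

print(merge_overlap("ACGT", "ACAT"))  # 'ACRT' (G vs A -> R)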

Note

  • Using the wrong fork of porechop will cause the pipeline to fail.
  • Lilo is a work in progress and has been tested on a limited number of references, amplicon sizes, and overlap sizes; I recommend checking the results carefully for each new scheme.
  • The pipeline currently assumes that any structural variants are contained between the primers of an amplicon and do not change the length of the amplicon by more than 5%. If alt amplicons produce a product of a different length to the original amplicon, they may not be allocated to their amplicon. Editing the pipeline to work better with alt amplicons is on my to-do list.
  • Lilo should not be used with reads produced with rapid kits; the pipeline assumes the reads are the length of the amplicons.
  • Do let me know if it destroys any cities or steals everyone's left shoe.
Comments
  • Error in rule reporechop

    Hello, while running the sample dataset I encountered the following error messages. I have made sure that porechop is installed correctly and is in the path.

    Any help is greatly appreciated.

    Error in rule reporechop:
        jobid: 2
        output: FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        shell:
            porechop --adapter_threshold 72 --end_threshold 70 --end_size 30 --extra_end_trim 5 --min_trim_size 3 -f ASFV.primers.csv -i FAT94769_pass_barcode02_66883b35_0/polished_clipped_amplicons.fa --threads 8 --no_split -o FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

    opened by tboonf
  • Error while running LILO

    Dear all, I get the following error while running LILO. Any idea what the problem could be?

    /bin/bash: /home/minion/anaconda3/envs/LILO/etc/profile.d/conda.sh: No such file or directory
    
    CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
    To initialize your shell, run
    
        $ conda init <SHELL_NAME>
    
    Currently supported shells are:
      - bash
      - fish
      - tcsh
      - xonsh
      - zsh
      - powershell
    
    See 'conda init --help' for more information and options.
    
    IMPORTANT: You may need to close and restart your shell after running 'conda init'.
    
    
    /bin/bash: line 2: scaffold_builder.py: command not found
    sed: can't read reads_24h_Scaffold.fasta: No such file or directory
    [Wed Aug 10 11:12:28 2022]
    Error in rule scaffold:
        jobid: 1
        output: reads_24h_Scaffold.fasta
        shell:
            source $CONDA_PREFIX/etc/profile.d/conda.sh
                    conda activate scaffold_builder
                    scaffold_builder.py -i 75 -t 3693 -g 80000 -r /home/minion/lilo-test/ASFV.reference.fasta -q reads_24h/polished_trimmed.fa -p reads_24h
                    sed -i '1 s/^.*$/>reads_24h_Lilo_scaffold/' reads_24h_Scaffold.fasta
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
    
    Job failed, going on with independent jobs.
    Exiting because a job execution failed. Look above for error message
    Complete log: /home/minion/lilo-test/.snakemake/log/2022-08-10T111227.425486.snakemake.log
    

    Kind regards, Elisabeth

    opened by el-mat
  • LILO with SLURM

    Hi there,

    I'm trying to run LILO on a SLURM HPC and I'm not sure what the errors are related to. Do you have an idea? It seems really environment dependent, but maybe you have stumbled across something similar.

    Call:

    snakemake -k -s [...]/tools/Lilo/LILO --configfile $CONFIG --profile [...]/tools/config-snippets/snake-cookies/slurm
    

    Log:

    [...]
    MissingOutputException in line 84 of [...]/tools/Lilo/LILO:
    Job Missing files after 30 seconds:
    FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
    This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
    Job id: 133673 completed successfully, but some output files are missing. 133673
    Trying to restart job 133673.
    [...]
    Error in rule assign:
        jobid: 133673
        output: FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
        shell:
            bedtools intersect -F 0.9 -wa -wb -bed -abam FAR95540_pass_unclassified_7f618209_73/alignments/reads_to_ref.bam -b amplicons.bed  | grep amplicon51 - | awk '{print $4}' - | seqtk subseq porechop/FAR95540_pass_unclassified_7f618209_73.fastq.gz - > FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
        cluster_jobid: 210115
    
    Error executing rule assign on cluster (jobid: 133673, external: 210115, jobscript: [...]/.snakemake/tmp.cssfeg5e/snakejob.assign.133673.sh). For error details see the cluster log and the log files of the involved rule(s).
    [...]
    Traceback (most recent call last):
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/__init__.py", line 701, in snakemake
        success = workflow.execute(
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/workflow.py", line 1077, in execute
        success = self.scheduler.schedule()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 441, in schedule
        self._error_jobs()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 557, in _error_jobs
        self._handle_error(job)
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 615, in _handle_error
        self.running.remove(job)
    KeyError: assign
    

    With --latency-wait 90 set, it again breaks after some time at an assign rule, with a KeyError: read_select from the snakemake scheduler.

    Let me know which input/config files might be interesting to solve this. :)

    opened by MarieLataretu
Releases: v0.2
Owner: Amanda Warr