ULMFiT for Genomic Sequence Data

Overview

This is an implementation of ULMFiT for genomic sequence classification, built with PyTorch and fastai. The model architecture is based on the AWD-LSTM, consisting of an embedding layer, three LSTM layers, and a final set of linear layers.
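
The data flow, in rough terms, looks like the minimal PyTorch sketch below. Layer sizes are illustrative assumptions; the actual model uses fastai's regularized AWD-LSTM, which adds weight-dropped LSTMs, variational dropout, and a concat-pooling classifier head.

```python
import torch.nn as nn

class GenomicClassifierSketch(nn.Module):
    """Hypothetical skeleton: embedding -> three LSTM layers -> linear head.
    The repo's real model is fastai's AWD-LSTM with its extra regularization."""
    def __init__(self, vocab_sz, emb_sz=400, hid_sz=1150, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, emb_sz)
        self.rnn = nn.LSTM(emb_sz, hid_sz, num_layers=3, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hid_sz, 50), nn.ReLU(), nn.Linear(50, n_classes))

    def forward(self, x):              # x: (batch, seq_len) of token ids
        out, _ = self.rnn(self.emb(x))
        return self.head(out[:, -1])   # classify from the final hidden state
```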

The ULMFiT approach uses three training phases to produce a classification model, as sketched in the code below:

  1. Train a language model on a large, unlabeled corpus
  2. Fine-tune the language model on the classification corpus
  3. Use the fine-tuned language model to initialize a classification model
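
In fastai v1 terms, the three phases follow the pattern below. This is a minimal sketch assuming the standard fastai text API; the `data_*` objects are hypothetical DataBunches, and the repo wraps its own genomic tokenization and data processing around this workflow.

```python
from fastai.text import language_model_learner, text_classifier_learner, AWD_LSTM

# data_lm_genome, data_lm_clas, data_clas are placeholder DataBunches built
# from the unlabeled genomic corpus and the labeled classification corpus.

# Phase 1: train a language model on the large unlabeled genomic corpus
lm = language_model_learner(data_lm_genome, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(10, 1e-2)
lm.save_encoder('genome_enc')

# Phase 2: fine-tune the language model on the classification corpus
lm_ft = language_model_learner(data_lm_clas, AWD_LSTM, drop_mult=0.3)
lm_ft.load_encoder('genome_enc')
lm_ft.fit_one_cycle(5, 1e-3)
lm_ft.save_encoder('ft_enc')

# Phase 3: initialize a classifier with the fine-tuned encoder
clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clas.load_encoder('ft_enc')
clas.fit_one_cycle(3, 1e-2)
```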

This method is particularly advantageous for genomic data, where unlabeled sequence data is abundant and labeled data is scarce. The ULMFiT approach allows us to train a model on a large, unlabeled genomic corpus in an unsupervised fashion. The pre-trained language model then serves as a feature extractor for downstream genomic classification tasks.

Typical deep learning approaches to genomics classification are restricted to whatever labeled data is available. Models are usually trained from scratch on small datasets, leading to problems with overfitting. When unsupervised pre-training is used, it is typically done only on the classification dataset or on synthetically generated data. The Genomic-ULMFiT approach uses genome-scale corpora for pre-training, producing better feature extractors than training on the classification corpus alone.

For a deep dive into the ULMFiT approach, model architectures, regularization and training strategies, see the Methods Long Form document in the Methods section.

Results

Performance of Genomic-ULMFiT relative to other methods

Promoter Classification

E. coli promoters

The Genomic-ULMFiT method performs well at the task of classifying promoter sequences from random sections of the genome. The process of unsupervised pre-training and fine-tuning has a clear impact on the performance of the classification model.

| Model | Accuracy | Precision | Recall | Correlation Coefficient |
| --- | --- | --- | --- | --- |
| Naive | 0.834 | 0.847 | 0.816 | 0.670 |
| E. coli Genome Pre-Training | 0.919 | 0.941 | 0.893 | 0.839 |
| Genomic Ensemble Pre-Training | 0.973 | 0.980 | 0.966 | 0.947 |

Data generation described in notebook

Notebook Directory

Classification performance on human promoters is competitive with published results

Human Promoters (short)

For the short promoter sequences, using data from Recognition of Prokaryotic and Eukaryotic Promoters using Convolutional Deep Learning Neural Networks:

| Model | DNA Size | kmer/stride | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kh et al. | -200/50 | - | - | - | 0.9 | 0.89 | 0.98 |
| Naive Model | -200/50 | 5/2 | 0.80 | 0.74 | 0.80 | 0.59 | 0.80 |
| With Pre-Training | -200/50 | 5/2 | 0.922 | 0.963 | 0.849 | 0.844 | 0.976 |
| With Pre-Training and Fine Tuning | -200/50 | 5/2 | 0.977 | 0.959 | 0.989 | 0.955 | 0.969 |
| With Pre-Training and Fine Tuning | -200/50 | 5/1 | 0.990 | 0.983 | 0.995 | 0.981 | 0.987 |
| With Pre-Training and Fine Tuning | -200/50 | 3/1 | 0.995 | 0.992 | 0.996 | 0.991 | 0.994 |
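
The kmer/stride column describes how sequences are tokenized: each token is a k-base substring, and the window advances by the stride, so a stride smaller than k produces overlapping tokens. A minimal sketch of this tokenization (a hypothetical helper, not the repo's actual tokenizer):

```python
def kmer_tokenize(seq, k=5, stride=2):
    """Split a DNA sequence into k-mer tokens with a fixed stride."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(kmer_tokenize("ATGCGTACGT", k=5, stride=2))
# ['ATGCG', 'GCGTA', 'GTACG']
```

In the table above, smaller k-mers with stride 1 give the best results.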

Data Source

Notebook Directory

Human Promoters (long)

For the long promoter sequences, using data from PromID: Human Promoter Prediction by Deep Learning:

| Model | DNA Size | Models | Accuracy | Precision | Recall | Correlation Coefficient |
| --- | --- | --- | --- | --- | --- | --- |
| Umarov et al. | -1000/500 | 2 Model Ensemble | - | 0.636 | 0.802 | 0.714 |
| Umarov et al. | -200/400 | 2 Model Ensemble | - | 0.769 | 0.755 | 0.762 |
| Naive Model | -500/500 | Single Model | 0.858 | 0.877 | 0.772 | 0.708 |
| With Pre-Training | -500/500 | Single Model | 0.888 | 0.90 | 0.824 | 0.770 |
| With Pre-Training and Fine Tuning | -500/500 | Single Model | 0.892 | 0.877 | 0.865 | 0.778 |

Data generation described in notebook

Notebook Directory

Other Bacterial Promoters

This table shows results on data from Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks. These results show how CNN-based methods can sometimes perform better when trained on small datasets.

| Method | Organism | Training Examples | Accuracy | Precision | Recall | Correlation Coefficient | Specificity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kh et al. | E. coli | 2936 | - | - | 0.90 | 0.84 | 0.96 |
| Genomic-ULMFiT | E. coli | 2936 | 0.956 | 0.917 | 0.880 | 0.871 | 0.977 |
| Kh et al. | B. subtilis | 1050 | - | - | 0.91 | 0.86 | 0.95 |
| Genomic-ULMFiT | B. subtilis | 1050 | 0.905 | 0.857 | 0.789 | 0.759 | 0.95 |

Data Source

Notebook Directory

Metagenomics Classification

Genomic-ULMFiT shows improved performance on the metagenomics taxonomic dataset from Deep learning models for bacteria taxonomic classification of metagenomic data.

| Method | Data Source | Accuracy | Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- |
| Fiannaca et al. | Amplicon | 0.9137 | 0.9162 | 0.9137 | 0.9126 |
| Genomic-ULMFiT | Amplicon | 0.9239 | 0.9402 | 0.9332 | 0.9306 |
| Fiannaca et al. | Shotgun | 0.8550 | 0.8570 | 0.8520 | 0.8511 |
| Genomic-ULMFiT | Shotgun | 0.8797 | 0.8824 | 0.8769 | 0.8758 |

Data Source

Notebook Directory

Enhancer Classification

When trained on a dataset of mammalian enhancer sequences from Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences, Genomic-ULMFiT improves on results from Cohn et al.

| Model (ROC-AUC) | Human | Mouse | Dog | Opossum |
| --- | --- | --- | --- | --- |
| Cohn et al. | 0.80 | 0.78 | 0.77 | 0.72 |
| Genomic-ULMFiT 5-mer Stride 2 | 0.812 | 0.871 | 0.773 | 0.787 |
| Genomic-ULMFiT 4-mer Stride 2 | 0.804 | 0.876 | 0.771 | 0.786 |
| Genomic-ULMFiT 3-mer Stride 1 | 0.819 | 0.875 | 0.788 | 0.798 |

Data Source

Notebook Directory

mRNA/lncRNA Classification

This table shows results for training a classification model on a dataset of coding mRNA sequences and long noncoding RNA (lncRNA) sequences. The dataset comes from A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential by Hill et al. The dataset contains two test sets: a standard test set and a challenge test set.

| Model | Test Set | Accuracy | Specificity | Sensitivity | Precision | MCC |
| --- | --- | --- | --- | --- | --- | --- |
| GRU Ensemble (Hill et al.)* | Standard Test Set | 0.96 | 0.97 | 0.95 | 0.97 | 0.92 |
| Genomic ULMFiT (3mer stride 1) | Standard Test Set | 0.963 | 0.952 | 0.974 | 0.953 | 0.926 |
| GRU Ensemble (Hill et al.)* | Challenge Test Set | 0.875 | 0.95 | 0.80 | 0.95 | 0.75 |
| Genomic ULMFiT (3mer stride 1) | Challenge Test Set | 0.90 | 0.944 | 0.871 | 0.939 | 0.817 |

(*) Hill et al. presented their results as a plot rather than a data table. Values in the above table are estimated by reading off the plot.

Data Source

Notebook Directory

Interpreting Results

One way to gain insight into how the classification model makes decisions is to perturb regions of a given input sequence and observe how changes to different regions affect the classification result. This allows us to create plots like the one below, highlighting important sequence regions for classification. In the plot below, the red line corresponds to a true transcription start site; the plot shows how prediction results are sensitive to changes around that location. More detail on interpretations can be found in the Model Interpretations directory.
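
A minimal sketch of such a perturbation scan appears below; `predict_proba` is a hypothetical stand-in for the trained classifier, and the repo's actual interpretation code lives in the Model Interpretations directory.

```python
import random

def perturbation_scan(seq, predict_proba, window=20):
    """Replace each window of the input with random bases and record how much
    the predicted probability drops; large drops flag important regions."""
    base = predict_proba(seq)
    deltas = []
    for start in range(len(seq) - window + 1):
        noise = ''.join(random.choice('ACGT') for _ in range(window))
        perturbed = seq[:start] + noise + seq[start + window:]
        deltas.append(base - predict_proba(perturbed))
    return deltas
```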

Long Sequence Inference

Inference on long, unlabeled sequences can be done by breaking the input sequence into chunks and plotting prediction results as a function of position. The image below shows a sample prediction of promoter locations on a 40,000 bp region of the E. coli genome. True promoter locations are shown in red. More detail can be found in this notebook.
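
A sketch of the chunked-inference idea (window and stride values are illustrative; `predict_proba` is again a hypothetical stand-in for the trained classifier):

```python
def scan_sequence(seq, predict_proba, window=500, stride=100):
    """Slide a fixed window along a long sequence and score each chunk,
    producing a prediction profile as a function of genomic position."""
    positions, scores = [], []
    for start in range(0, len(seq) - window + 1, stride):
        positions.append(start)
        scores.append(predict_proba(seq[start:start + window]))
    return positions, scores  # plot scores vs. positions to localize features
```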

Relevant Literature

For a comparison to other published methods, see Section 6 of the Methods notebook. Here are some relevant papers in the deep genomics classification space.

DeepCRISPR: optimized CRISPR guide RNA design by deep learning

Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

PromID: human promoter prediction by deep learning

Deep Learning for Genomics: A Concise Overview

Prediction of deleterious mutations in coding regions of mammals with transfer learning

Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences

PEDLA: predicting enhancers with a deep learning-based algorithmic framework

Predicting enhancers with deep convolutional neural networks

BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone

Deep learning models for bacteria taxonomic classification of metagenomic data

Prediction of enhancer-promoter interactions via natural language processing

A deep recurrent neural network discovers complex biological rules to decipher RNA protein-coding potential

Recurrent Neural Network for Predicting Transcription Factor Binding Sites

Learning the Language of the Genome using RNNs
