Combinatorial model of ligand-receptor binding

Overview

The binding of ligands to receptors is the starting point for many important signaling pathways within a cell, but in contrast to the specificity of the processes that follow such bindings, the bindings themselves are often non-specific: a single type of ligand can often bind to multiple receptors beyond the single receptor to which it binds optimally. This property of ligand-receptor binding naturally leads to a simple question:

If a collection of ligands can bind non-specifically to a collection of receptors, but each ligand type has a specific receptor to which it binds most strongly, under what thermal conditions will all ligands bind to their optimal sites?


Depiction of various ligand types binding optimally and sub-optimally to receptors

In this repository, we collect all the simulations that helped us explore this question in the associated paper. In particular, to provide a conceptual handle on the features of optimal and sub-optimal bindings of ligands, we considered an analogous model of colors binding to a grid.


Partially correct and completely correct binding for the image

In the same way that ligands have certain receptors to which they bind optimally (even though they could bind to many others), each colored square has a certain correct location in the image grid but could sit anywhere on the grid. We chose the correct locations so that they form a simple image, which makes it clear by eye during a simulation whether the system has settled into its completely correct configuration. In all of the notebooks in this repository, we use this system of grid assembly as a toy model to outline the properties of our ligand-receptor binding model.

Reproducing figures and tables

Each notebook reproduces a figure in the paper.

Simulation Scheme

For these simulations, we needed to define a microstate, the types of transitions between microstates, and the probabilities of those transitions.

Microstate Definition

A microstate of our system was defined by two lists: one representing the collection of unbound particles, and the other representing particles bound to their various binding sites. The particles themselves were denoted by unique strings and came in multiple copies according to the system parameters. For example, a system with R = 3 types of particles with n1 = 2, n2 = 3, and n3 = 1 could have a microstate defined by unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] where “−” in the bound list stands for an empty binding site.
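As a concrete sketch of this bookkeeping (the list names and the "−" marker mirror the example above; this is an illustration, not the notebooks' exact code), the microstate could be written in Python as:

```python
# Microstate from the example above: R = 3 particle types with copy numbers
# n1 = 2, n2 = 3, n3 = 1; "-" marks an empty binding site.
unbound_particles = ["A2", "A2", "A3"]
bound_particles = ["A1", "-", "A2", "-", "A1", "-"]
```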

Since the number of optimally bound particles was an important observable for the system, we also needed to define the optimal binding configuration for the microstates. Such an optimal configuration was chosen at the start of the simulation and was defined as a microstate with no unbound particles and all the bound particles in a particular order. For example, using the previous example, we might define the optimal binding configuration as optimal_bound_config = [A1, A1, A2, A2, A2, A3], in which case the number of optimally bound particles of each type in bound_particles = [A1, −, A2, −, A1, −] is m1 = 1, m2 = 1, and m3 = 0, while the number of bound particles of each type is k1 = 2, k2 = 1, and k3 = 0. We note that the order of the elements in unbound_particles is not physically important, but, since the number of optimally bound particles is an important observable, the order of the elements in bound_particles is physically important.
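To make the counts k_i and m_i concrete, here is a minimal sketch (assuming the list representation above; not the notebooks' exact implementation) of how they could be computed:

```python
from collections import Counter

def count_bound_and_optimal(bound_particles, optimal_bound_config, empty="-"):
    # k: number of bound particles of each type
    k = Counter(p for p in bound_particles if p != empty)
    # m: number of bound particles sitting in their optimal binding site
    m = Counter(
        p
        for p, target in zip(bound_particles, optimal_bound_config)
        if p != empty and p == target
    )
    return k, m

# Example from the text: gives k = {A1: 2, A2: 1} and m = {A1: 1, A2: 1},
# i.e. k1 = 2, k2 = 1, k3 = 0 and m1 = 1, m2 = 1, m3 = 0.
k, m = count_bound_and_optimal(
    ["A1", "-", "A2", "-", "A1", "-"],
    ["A1", "A1", "A2", "A2", "A2", "A3"],
)
```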

For these simulations, the energy of a microstate with k_i bound particles of type i and m_i optimally bound particles of type i was defined as

E(k, m) = Σ_{i=1}^{R} ( m_i log δ_i + k_i log γ_i )

where k = [k1, k2, ..., kR] and m = [m1, m2, ..., mR], γ_i is the binding affinity of particles of type i, and δ_i is the optimal binding affinity of particles of type i. For transitioning between microstates, we allowed for three different transition types: particle binding to a site, particle unbinding from a site, and permutation of two particles in two different binding sites. Particle binding and unbinding both occur in real physical systems, but permutation of particle positions is unphysical. This latter transition type was included to ensure an efficient-in-time sampling of the state space. (Note: For simulations of equilibrium systems it is valid to include physically unrealistic transition types as long as the associated transition probabilities obey detailed balance.)
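As a sketch of this energy function (assuming the affinities γ_i and δ_i are stored in dictionaries keyed by particle type; this mirrors the formula above rather than the notebooks' exact code):

```python
import numpy as np

def energy(k, m, gamma, delta):
    # E(k, m) = sum_i ( m_i * log(delta_i) + k_i * log(gamma_i) )
    # k, m: dicts (or Counters) mapping particle type -> bound / optimally bound counts
    # gamma, delta: dicts mapping particle type -> binding / optimal binding affinity
    return sum(
        m.get(t, 0) * np.log(delta[t]) + k.get(t, 0) * np.log(gamma[t])
        for t in gamma
    )
```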

Transition Probability

At each time step, we randomly selected one of the three transition types (with equal probability for each type), then randomly selected the final proposed microstate given the initial microstate, and finally computed the probability that said proposal was accepted. By the Metropolis-Hastings algorithm, the probability that the transition is accepted is given by

prob(init → fin) = min{1, exp(−β(Efin − Einit)) · π(fin → init)/π(init → fin)}

where Einit is the energy of the initial microstate and Efin is the energy of the final microstate. The quantity π(init → fin) is the probability of randomly proposing the final microstate given the initial microstate, and π(fin → init) is defined similarly. The ratio π(fin → init)/π(init → fin) varied for each transition type. Below we give examples of these transitions along with the value of this ratio in each case. In the following, Nf and Nb represent the number of free particles and the number of bound particles, respectively, before the transition.
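A minimal sketch of this acceptance step (the function name and arguments are illustrative, not the notebooks' exact interface):

```python
import numpy as np

def acceptance_probability(E_init, E_fin, beta, pi_ratio):
    # pi_ratio = pi(fin -> init) / pi(init -> fin) for the proposed transition
    return min(1.0, np.exp(-beta * (E_fin - E_init)) * pi_ratio)

# A proposed move would then be accepted when
# np.random.rand() < acceptance_probability(E_init, E_fin, beta, pi_ratio).
```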

Types of Transitions

  • Particle Binding to Site: One particle was randomly chosen from the unbound_particles list and placed in a randomly chosen empty site in the bound_particles list. π(fin → init)/π(init → fin) = Nf^2/(Nb +1).

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A3] and bound_particles = [A1, A2, A2, −, A1, −]; π(fin → init)/π(init → fin) = 9/4

  • Particle Unbinding from Site: One particle was randomly chosen from the bound_particles list and placed in the unbound_particles list. π(fin → init)/π(init → fin) = Nb/(Nf + 1)^2.

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3, A2] and bound_particles = [A1, −, −, −, A1, −]; π(fin → init)/π(init → fin) = 3/16

  • Particle Permutation: Two randomly selected particles in the bound_particles list switched positions. π(fin → init)/π(init → fin) = 1.

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3] and bound_particles = [A2, −, A1, −, A1, −]; π(fin → init)/π(init → fin) = 1

For impossible transitions (e.g., particle binding when there are no free particles) the probability for accepting the transition was set to zero. At each temperature, the simulation was run for anywhere from 10,000 to 30,000 time steps (depending on convergence properties), of which the last 2.5% of steps were used to compute ensemble averages of ⟨k⟩ and ⟨m⟩. These simulations were repeated five times, and each point in Fig. 6b, Fig. 7b, Fig. 8b, and Fig. 9 in the paper represents the average ⟨k⟩ and ⟨m⟩ over these five runs.
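Putting the proposal ratios and the impossible-transition rule together, the proposal step could be sketched as follows (a sketch under the conventions above, not the notebooks' exact code; n_free and n_bound are Nf and Nb before the move):

```python
def proposal_ratio(move, n_free, n_bound):
    # Returns pi(fin -> init) / pi(init -> fin) for the chosen transition type,
    # or None when the move is impossible (acceptance probability set to zero).
    if move == "bind":
        return None if n_free == 0 else n_free**2 / (n_bound + 1)
    if move == "unbind":
        return None if n_bound == 0 else n_bound / (n_free + 1) ** 2
    if move == "permute":
        return None if n_bound < 2 else 1.0
    raise ValueError(f"unknown move: {move}")

# With Nf = 3 and Nb = 3 this reproduces the ratios 9/4, 3/16, and 1
# from the examples above.
```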

References

[1] Mobolaji Williams. "Combinatorial model of ligand-receptor binding." 2022. [http://arxiv.org/abs/2201.09471]


@article{williams2022comb,
  title={Combinatorial model of ligand-receptor binding},
  author={Williams, Mobolaji},
  journal={arXiv preprint arXiv:2201.09471},
  year={2022}
}