GT4SD, an open-source library to accelerate hypothesis generation in the scientific discovery process.

Overview

GT4SD (Generative Toolkit for Scientific Discovery)


The GT4SD (Generative Toolkit for Scientific Discovery) is an open-source platform to accelerate hypothesis generation in the scientific discovery process. It provides a library for making state-of-the-art generative AI models easier to use.

Installation

pip

You can install gt4sd directly from GitHub:

pip install git+https://github.com/GT4SD/gt4sd-core

Development setup & installation

If you would like to contribute to the package, we recommend the following development setup: Clone the gt4sd-core repository:

git clone [email protected]:GT4SD/gt4sd-core.git
cd gt4sd-core
conda env create -f conda.yml
conda activate gt4sd
pip install -e .
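
A quick import check can verify the setup (assuming the editable install completed without errors):

python -c "import gt4sd"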

Learn more in CONTRIBUTING.md

Supported packages

Beyond implementing various generative modeling inference and training pipelines, GT4SD is designed to provide a high-level API that implements a harmonized interface for several existing packages:

  • GuacaMol: inference pipelines for the baseline models.
  • MOSES: inference pipelines for the baseline models.
  • TAPE: encoder modules compatible with the protein language models.
  • PaccMann: inference pipelines for all algorithms of the PaccMann family as well as training pipelines for the generative VAEs.
  • transformers: training and inference pipelines for generative models from the Hugging Face Model Hub.

Using GT4SD

Running inference pipelines

Running an algorithm is as easy as typing:

from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRLProteinBasedGenerator, PaccMannRL
)
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
# algorithm configuration with default parameters
configuration = PaccMannRLProteinBasedGenerator()
# instantiate the algorithm for sampling
algorithm = PaccMannRL(configuration=configuration, target=target)
items = list(algorithm.sample(10))
print(items)

Or you can use the ApplicationsRegistry to run an algorithm instance using a serialized representation of the algorithm:

from gt4sd.algorithms.registry import ApplicationsRegistry
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
algorithm = ApplicationsRegistry.get_application_instance(
    target=target,
    algorithm_type='conditional_generation',
    domain='materials',
    algorithm_name='PaccMannRL',
    algorithm_application='PaccMannRLProteinBasedGenerator',
    generated_length=32,
    # include additional configuration parameters as **kwargs
)
items = list(algorithm.sample(10))
print(items)

Running training pipelines via the CLI command

GT4SD provides a trainer client based on the gt4sd-trainer CLI command. The trainer currently supports training pipelines for language modeling (language-modeling-trainer), PaccMann (paccmann-vae-trainer) and Granular (granular-trainer, multimodal compositional autoencoders).

$ gt4sd-trainer --help
usage: gt4sd-trainer [-h] --training_pipeline_name TRAINING_PIPELINE_NAME
                     [--configuration_file CONFIGURATION_FILE]

optional arguments:
  -h, --help            show this help message and exit
  --training_pipeline_name TRAINING_PIPELINE_NAME
                        Training type of the converted model, supported types:
                        granular-trainer, language-modeling-trainer, paccmann-
                        vae-trainer. (default: None)
  --configuration_file CONFIGURATION_FILE
                        Configuration file for the training. It can be used
                        to completely bypass pipeline specific arguments.
                        (default: None)

To launch a training, you have two options.

You can either specify the training pipeline and the path of a configuration file that contains the needed training parameters:

gt4sd-trainer  --training_pipeline_name ${TRAINING_PIPELINE_NAME} --configuration_file ${CONFIGURATION_FILE}
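
The configuration file format is pipeline-specific; as a sketch, assuming a flat JSON mapping from the argument names listed by --help to their values (check the pipeline documentation for the exact schema), a file for the language-modeling-trainer example below might look like:

{
    "type": "mlm",
    "model_name_or_path": "mlm",
    "training_file": "/path/to/train_file.jsonl",
    "validation_file": "/path/to/valid_file.jsonl"
}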

Or you can provide the needed parameters directly as arguments:

gt4sd-trainer  --training_pipeline_name language-modeling-trainer --type mlm --model_name_or_path mlm --training_file /path/to/train_file.jsonl --validation_file /path/to/valid_file.jsonl

To get more info on a specific training pipeline's arguments, simply type:

gt4sd-trainer --training_pipeline_name ${TRAINING_PIPELINE_NAME} --help

References

If you use gt4sd in your projects, please consider citing the following:

@software{GT4SD,
  author = {GT4SD Team},
  month = {2},
  title = {{GT4SD (Generative Toolkit for Scientific Discovery)}},
  url = {https://github.com/GT4SD/gt4sd-core},
  version = {main},
  year = {2022}
}

License

The gt4sd codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

Comments
  • cli-upload

    Add upload functionality to the command line. It lets the user upload specific artifacts to a server.

    Given a specific version for an algorithm:

    • Check if that version is already on the server, i.e., check whether the folder bucket/algorithm_type/algorithm_name/algorithm_application/version/ exists.
    • If yes, tell the user and stop the upload.
    • If not, upload all the files in that version.

    cli-upload relies on minio and has been tested locally using docker-compose. It can be used to upload to a cloud or to a local server.


    How to use cli-upload

    Following the example in the README (in the Saving a trained algorithm for inference via the CLI command section) and assuming a trained model in /tmp/test_cli_upload, run:

    gt4sd-upload --training_pipeline_name paccmann-vae-trainer --model_path /tmp/test_cli_upload --training_name fast-example --target_version fast-example-v0 --algorithm_application PaccMannGPGenerator

    opened by georgosgeorgos 15
  • MOSES VAE from Guacamol training reconstruction is "incorrect"

    Describe the bug The VAE in GT4SD uses Guacamol's wrapper of the Moses VAE. Unfortunately, the decoder training step of the Moses VAE is bugged.

    More detail The problem arises from the definition of the forward_decoder method:

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]
    
        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)
    
        z_0 = z.unsqueeze(1).repeat(1, x_emb.size(1), 1)
        x_input = torch.cat([x_emb, z_0], dim=-1)  # <--- PROBLEM 1
        x_input = nn.utils.rnn.pack_padded_sequence(x_input, lengths, batch_first=True)
    
        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)
    
        output, _ = self.decoder_rnn(x_input, h_0)
    
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)
    
        recon_loss = F.cross_entropy(  # <--- PROBLEM 2
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad
        )
    
        return recon_loss
    

    Namely, the reconstruction step is wrong in two spots:

    1. construction of the true input: x_input = torch.cat([x_emb, z_0], dim=-1). In the visual representation of a typical RNN, the true token feeds in from the "bottom" of the cell and the previous hidden state from the "left". In this implementation, the reparameterized latent vector z is fed in both from the "left" (normal) and the "bottom" (atypical). Fix: this line should be removed.
    2. calculation of the reconstruction loss: recon_loss = F.cross_entropy(...). This reconstruction loss is calculated as the per-token loss of the input batch (i.e., the mean over a batch of tokens) because the default reduction in F.cross_entropy is "mean". In turn, this results in reconstruction losses that are very low for the VAE, causing the optimizer to ignore the decoder and focus on the encoder. When a VAE focuses too hard on the encoder, you get mode collapse, and that's what happens with the Moses VAE. Fix: this line should be F.cross_entropy(..., reduction="sum") / len(x). A sketch applying both fixes follows this list.
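
    A sketch of forward_decoder with both proposed fixes applied (note that removing the concatenation changes the decoder RNN's expected input size, so the model definition would need a matching adjustment):

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]

        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)

        # Fix 1: condition on z only via the initial hidden state; do not
        # also concatenate z to every input token.
        x_input = nn.utils.rnn.pack_padded_sequence(x_emb, lengths, batch_first=True)

        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)

        output, _ = self.decoder_rnn(x_input, h_0)
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)

        # Fix 2: sum over tokens, then average per sequence, so the decoder
        # loss is not diluted to a per-token mean.
        recon_loss = F.cross_entropy(
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad,
            reduction="sum",
        ) / len(x)

        return recon_loss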

    To reproduce

    1. Problem 1 is not a "problem" so much as it is highly atypical to structure a VAE like this. I can't say whether it results in any actual problems, but it simply shouldn't be there.
    2. Problem 2 can be observed with two experiments:
      1. Using PCA with two dimensions, plot the embeddings of a random batch z ~ q(z|x) and a sample from the standard normal distribution z ~ N(0, I). The embeddings from the encoder will look like a point at (0, 0) compared to the samples from the standard normal (a small synthetic illustration follows this list).
      2. Measure the reconstruction accuracy x_r ~ p(x | z ~ q(z | x_0)). In a well-trained VAE, sum(x_r == x_0 for x_0 in xs) / len(xs) should be above 50%; this VAE generally scores fairly low (in my experience).
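
    A small synthetic illustration of experiment 1 (the data here is a stand-in for the encoder output, not generated by the actual model): under posterior collapse, encoder samples cluster near the origin relative to the prior.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    z_post = 0.01 * rng.standard_normal((256, 128))  # stand-in for collapsed z ~ q(z|x)
    z_prior = rng.standard_normal((256, 128))        # z ~ N(0, I)
    proj = PCA(n_components=2).fit_transform(np.vstack([z_post, z_prior]))
    # proj[:256] (posterior) collapses to a tiny cluster near (0, 0) compared
    # to proj[256:] (prior samples), reproducing the "point at (0, 0)" symptom.
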
    bug 
    opened by davidegraff 12
  • Improve CLA workflow

    Using actions to commit to other people's forks was not something super easy to do, so I'm settling for a bit more verbosity and automation.

    The issue will be closed with a comment pointing to the commit that added the contributor. There is a notice to merge this into a PR.

    Therefore the issue is no longer assigned.

    Looks like this: https://github.com/C-nit/gt4sd-core/issues/9 and can also be triggered in a different way: https://github.com/C-nit/gt4sd-core/issues/11

    opened by C-nit 11
  • feat: Support in RT Trainer for multiple entities.

    Solving #143 by expanding the Regression Transformer trainer to support multi-entity discriminations, i.e., supporting the multientity_cg collator from the RT repo.

    Signed-off-by: Nicolai Ree [email protected]

    opened by NicolaiRee 9
  • feat: property_predictors in scorer

    • Implement PropertyPredictorScorer in domains.materials.property_scorer (using domains.materials.scorer for the implementation would cause a circular import).
    • We simply use the PropertyPredictorRegistry to select a property and parameters by name and the PropertyPredictorScorer to compute a score on a sample with respect to a target value.
    • Tests mimic the logic in properties.
    cla-signed 
    opened by georgosgeorgos 8
  • Training pipeline Regression Transformer

    Adding a new training pipeline for the RT:

    • allows finetuning existing models available in the toolkit
    • allows training models from scratch
    • patches the LR schedulers in torchdrug, which are needed for RT training and threw errors
    cla-signed 
    opened by jannisborn 6
  • Added toxicity and affinity to visum notebook

    Signed-off-by: Eduardo [email protected]

    Added toxicity (Tox21 model from https://github.com/PaccMann/paccmann_sarscov2) and affinity (PaccMann predictor) to the notebook.

    @drugilsberg, I am not sure about one specific step in the notebook and I would really appreciate it if you could help: when calling sample in PaccMannGP for the first time, the first line of the output is

    configuring optimization for target: {'qed': {'weight': 1.0}, 'sa': {'weight': 1.0}}

    However, on the second call to the same object (no reinitialization), in section "Sampling and Plotting Molecules with GT4SD", the first line reads:

    configuring optimization for target: {'qed': {}, 'sa': {}}

    Do you know if this has any influence on the molecules being generated? I attached a PDF file with the output for convenience.

    visum-2022-handson-generative-models.pdf

    @helenaMontenegro, the notebook now requires users to download a small model, but I don't think this is a problem.

    cla-signed 
    opened by edux300 5
  • Problem multiprocess in requirements

    The new multiprocess library version (0.70.13) causes problems when installing gt4sd-core in development mode. I had to pin multiprocess==0.70.12.2 to install the library.

    opened by georgosgeorgos 5
  • Torchdrug trainer pipeline

    Implemented the torchdrug trainer pipeline. The models can be used via:

    gt4sd-trainer --training_pipeline_name torchdrug-gcpn-trainer -h
    gt4sd-trainer --training_pipeline_name torchdrug-graphaf-trainer -h
    

    Features:

    • [x] Support for the same two models that are available via inference: TorchDrugGCPN and TorchDrugGraphAF.
    • [x] Both models can be trained on all MoleculeDatasets from torchdrug.Datasets; those are around 20 predefined datasets.
    • [x] Implemented a custom dataset where users can pass their own data.
    • [x] In addition to the unit tests, I verified the functionality from the CLI via gt4sd-trainer.

    Problems:

    • [ ] Property optimization does not work, due to instabilities in TorchDrug. I opened an issue and a PR, but we have to wait until they merge it, release a new version, and we bump our dependency. The code I wrote here already supports property optimization, but I disabled the unit test for the moment because it would fail due to the TorchDrug issue. See details: https://github.com/DeepGraphLearning/torchdrug/issues/83
    • [x] gt4sd-saving: I ran a test via the CLI but the saving failed. Not sure how problematic this is; here's the error:
    INFO:gt4sd.cli.saving:Selected configuration: ConfigurationTuple(algorithm_type='generation', domain='materials', algorithm_name='TorchDrugGenerator', algorithm_application='TorchDrugGCPN')
    INFO:gt4sd.cli.saving:Saving model version "fast" with the following configuration: <class 'gt4sd.algorithms.generation.torchdrug.core.TorchDrugGCPN'>
    INFO:gt4sd.algorithms.core:TorchDrugGCPN can not save a version based on TorchDrugSavingArguments(model_path='/Users/jab/.gt4sd/runs/', training_name='gcpn_test')
    
    enhancement cla-signed 
    opened by jannisborn 5
  • RT sampling_wrapper to specify a substructure or series of tokens to keep unmasked

    I would like to propose an upgrade on the feature demonstrated in this notebook: https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/regression-transformer-demo.ipynb (see cells 12-14)

    In addition to explicitly specifying tokens_to_mask, one can imagine that a chemist might want to specify a substructure to mask or to "freeze" (keep unchanged, i.e., unmasked). It might be easier to specify tokens to freeze, as that would just mean selecting a part of the string to be kept unmasked. A prototype example is given below.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'tokens_to_freeze': ['N4CCOCC4']
        }
    

    If one could specify a substructure to freeze or to mask, that would potentially be even more advantageous, as it would remove ambiguities when a substructure can be expressed in more than one sequence.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'substructure_to_freeze': ['N1CCOCC1'],
            # explicitly mask benzene ring moiety
            'substructure_to_mask':  ['C1=CC=CC=C1'],
        }
    

    One could use RDKit functionality to identify substructure tokens, as given here: https://www.rdkit.org/docs/Cookbook.html#substructure-matching
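
    For instance, a minimal sketch using standard RDKit calls (the query molecule here is hypothetical, and mapping the matched atom indices back to SMILES tokens is left out):

    from rdkit import Chem

    mol = Chem.MolFromSmiles('CC1=CC=CC=C1N2CCOCC2')  # hypothetical query molecule
    pattern = Chem.MolFromSmarts('N1CCOCC1')          # morpholine substructure
    matches = mol.GetSubstructMatches(pattern)        # tuples of matched atom indices
    print(matches)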

    Regarding the interpretation of 'fraction_to_mask', I would then imagine that it would be best applied to the remaining set of tokens (after tokens_to_freeze and explicit tokens_to_mask are excluded): e.g., with 2 tokens frozen and 2 explicitly masked out of 10, fraction_to_mask=0.5 would mask 3 of the remaining 6 tokens. I hope this makes sense, happy to clarify and exemplify further.

    enhancement 
    opened by OleinikovasV 4
  • Artifact storage for property predictors

    Closes #116

    Now we can also store artifacts for property predictors.

    • New property predictors are tested.
    • One thing that remains to do is to have functions under gt4sd.properties.molecules.functions. At the moment this is not yet supported since it would yield circular imports.
    cla-signed 
    opened by jannisborn 4
  • RT saving pipeline

    Closes #169

    • gt4sd-saving now also supports the RT training pipeline. I implemented the get_filepath_mappings_for_training_pipeline_arguments method. The inference.json is now created inside the RT trainer and also saved in the model folder so that it can later be copied by gt4sd-saving. The Property class was needed as a helper for this, to track some attributes of each property.
    • Expanded the RT example. It now describes the full process of training/finetuning a model, saving it with gt4sd-saving, running inference on it, and finally uploading it to the model hub.

    I tested everything with the example from the README.

    Minors:

    • adding a method filter_stubbed to the molecular RT that removes stub-like molecules ("invalid SELFIES")
    • bumping the paccmann_gp dependency
    enhancement cla-signed 
    opened by jannisborn 0
  • RegressionTransformer saving pipeline

    Is your feature request related to a problem? Please describe. gt4sd-saving does not fully support the RT.

    ToDo:

    • Implement get_filepath_mappings_for_training_pipeline_arguments
    • Save inference.json to model dir
    enhancement 
    opened by jannisborn 0
  • Disentangle properties from algorithms

    Is your feature request related to a problem? Please describe. Currently, the properties submodule imports from algorithms.core and thus also from that __init__. In the __init__, we register all the training pipelines, so one needs to have all those dependencies installed, including torchdrug, guacamol_baselines and other VCS requirements.

    Describe the solution you'd like Creating a submodule gt4sd.core that specifies base classes used by multiple submodules like gt4sd.algorithms or gt4sd.properties.

    Describe alternatives you've considered Doing the imports only when someone calls list_available_algorithms.

    NOTE: When creating gt4sd.core we have to make sure that all the rest remains functional, including relative imports, jupyter notebooks (should be fine since we barely import from algorithms.core directly) and in particular also the documentation.

    enhancement 
    opened by jannisborn 0
  • Add methods for artifact-based property predictors

    Is your feature request related to a problem? Please describe. Currently the artifact-based property predictors (like gt4sd.properties.molecules.core.Tox21) are not usable as functions via gt4sd.properties.molecules.tox_21, unlike all the non-artifact-based properties. Moving the functions there would yield circular import issues.

    Describe the solution you'd like A small refactor that works around the circular imports.

    enhancement 
    opened by jannisborn 0
  • Refactor AlgorithmConfiguration baseclass

    Inconsistent types between the AlgorithmConfiguration base class and the child ConfigurablePropertyAlgorithmConfiguration, concerning attributes like domain but also methods like ensure_artifacts_for_version (class methods in the base class but instance methods in the child class).

    A simple refactor into 3 instead of 2 classes should fix this.

    Originally posted by @jannisborn in https://github.com/GT4SD/gt4sd-core/pull/121#discussion_r943649339

    • So the ones in the constructor, for lines like self.domain = domain, say: error: Cannot assign to class variable "domain" via instance. That's because in the parent class (AlgorithmConfiguration) we set it as domain: ClassVar[str]. A minimal reproduction is sketched below.
    • The ones in the signatures, like get_application_prefix which returns a str, are because in the parent class those are class methods, not instance methods. The error is Signature of "get_application_prefix" incompatible with supertype "AlgorithmConfiguration".
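
    A minimal reproduction of the first error (names reduced to the relevant annotation; runtime is unaffected, the error comes from mypy):

    from typing import ClassVar

    class AlgorithmConfiguration:
        domain: ClassVar[str]

    class ConfigurableChild(AlgorithmConfiguration):
        def __init__(self, domain: str) -> None:
            # mypy: error: Cannot assign to class variable "domain" via instance
            self.domain = domain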

    It might be fixable by a refactor, but I'm not sure it's worth it.

    refactoring 
    opened by jannisborn 0