Efficient and Scalable Physics-Informed Deep Learning and Scientific Machine Learning on top of TensorFlow for multi-worker distributed computing

Overview

TensorDiffEq logo


Notice: Support for Python 3.6 will be dropped in v0.2.1; please plan accordingly!

Efficient and Scalable Physics-Informed Deep Learning

Collocation-based PINN PDE solvers for prediction and discovery methods, built on top of TensorFlow 2.x for multi-worker distributed computing.

Use TensorDiffEq if you require:

  • A meshless PINN solver that can distribute over multiple workers (GPUs) for forward problems (inference) and inverse problems (discovery)
  • Scalable domains - iterated solver construction allows for N-D spatio-temporal support
    • Support for N-D spatial domains with no time element is also included
  • Self-Adaptive Collocation methods for forward and inverse PINNs
  • Intuitive user interface allowing for explicit definitions of variable domains, boundary conditions, initial conditions, and strong-form PDEs (see the sketch below)
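
A minimal end-to-end sketch of that interface, condensed from the Burgers examples discussed in the comments below (domain sizes, layer widths, and iteration counts are illustrative):

    import math
    import numpy as np
    import tensorflow as tf
    import tensordiffeq as tdq
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND

    # 1D spatial domain plus time, with collocation points sampled inside it
    Domain = DomainND(["x", "t"], time_var='t')
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    Domain.generate_collocation_points(10000)

    # initial condition and Dirichlet boundary conditions
    init = IC(Domain, [lambda x: -np.sin(math.pi * x)], var=[['x']])
    upper = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower = dirichletBC(Domain, val=0.0, var='x', target="lower")

    # strong-form residual of the viscous Burgers equation
    def f_model(u_model, x, t):
        u = u_model(tf.concat([x, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_t = tf.gradients(u, t)
        return u_t + u * u_x - (0.01 / tf.constant(math.pi)) * u_xx

    model = CollocationSolverND()
    model.compile([2, 20, 20, 20, 20, 1], f_model, Domain, [init, upper, lower])
    model.fit(tf_iter=10000, newton_iter=10000)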

What makes TensorDiffEq different?

  • Completely open-source

  • Self-Adaptive Solvers for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in less overall training time (see the sketch after this list)

  • Multi-GPU distributed training for large or fine-grain spatio-temporal domains

  • Built on top of TensorFlow 2.x for increased support of new functionality exclusive to recent TF releases, such as XLA support, AutoGraph for efficient graph building, and Grappler support for graph optimization* - with no chance of the source code being sunset in a future TensorFlow release

  • Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain English"

*In development
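
The self-adaptive scheme referenced above (and detailed in the citation below) trains a separate weight for each collocation point by gradient ascent on the same loss the network minimizes by descent. A minimal sketch of the idea, not TensorDiffEq's internal implementation (names and sizes are illustrative):

    import tensorflow as tf

    N_f = 10000  # number of collocation points

    # one trainable self-adaptive weight per collocation point
    lambdas = tf.Variable(tf.random.uniform([N_f, 1]))

    def self_adaptive_residual_loss(f_u):
        # f_u: PDE residual at the N_f collocation points, shape [N_f, 1]
        return tf.reduce_mean(tf.square(lambdas * f_u))

    # Training alternates descent on the network weights with ascent on
    # the lambdas, so points that remain hard to fit accumulate larger
    # weights and contribute more to subsequent updates.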

If you use TensorDiffEq in your work, please cite it via:

@article{mcclenny2021tensordiffeq,
  title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
  author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
  journal={arXiv preprint arXiv:2103.16034},
  year={2021}
}

Thanks to our additional contributors:

@marcelodallaqua, @ragusa, @emiliocoutinho

Comments
  • Latest version of package

    The examples in the docs use the latest code on the master branch, but the library on PyPI is still the version from May. Could you build the library and update the version on PyPI?

    opened by devzhk 5
  • ADAM training on batches

    This change makes it possible to define a batch size that is applied to the calculation of the residual loss function, splitting the collocation points into batches during training (see the sketch below).
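
    As a rough sketch of the idea (a hypothetical helper, not this change's actual code), batching the collocation points for the residual term could look like:

    import numpy as np

    def collocation_batches(X_f, batch_size):
        # shuffle once per pass, then yield mini-batches of collocation
        # points for the residual-loss computation
        idx = np.random.permutation(len(X_f))
        for start in range(0, len(X_f), batch_size):
            yield X_f[idx[start:start + batch_size]]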

    opened by emiliocoutinho 3
  • Pull Request using PyCharm

    Dear Levi,

    I tried to make a pull request on this repository using PyCharm, and I received the following message:

    Although you appear to have the correct authorization credentials, the tensordiffeq organization has enabled OAuth App access restrictions, meaning that data access to third-parties is limited. For more information on these restrictions, including how to whitelist this app, visit https://help.github.com/articles/restricting-access-to-your-organization-s-data/

    I would kindly ask you to authorize PyCharm to access your organization's data so that the GUI can be used for future pull requests.

    Best Regards

    opened by emiliocoutinho 1
  • Update method get_sizes of utils.py

    Fixes a bug in the method get_sizes(layer_sizes) of utils.py. The method only allowed neural nets with an identical number of nodes in each hidden layer, which caused the L-BFGS optimization to crash.
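
    For context, a dense layer mapping width m to width n has m * n weights and n biases, so the sizes must be computed per consecutive pair of entries; a hypothetical sketch of the corrected behavior (not the exact patch):

    def get_sizes(layer_sizes):
        # pair consecutive layer widths rather than assuming a constant
        # hidden width, e.g. [3, 20, 30, 1] -> weights [60, 600, 30]
        sizes_w = [layer_sizes[i - 1] * layer_sizes[i] for i in range(1, len(layer_sizes))]
        sizes_b = layer_sizes[1:]  # one bias per output node of each layer
        return sizes_w, sizes_b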

    opened by marcelodallaqua 1
  • model.save ?

    Sometimes it's useful to save the model for later use. I couldn't find a .save method, and pickle (and dill) didn't let me dump the object for later re-use (example error with pickle: Can't pickle local object 'make_gradient_clipnorm_fn..').

    Is it currently possible to save the model? Thanks!

    opened by ragusa 1
  • add model.save and model.load_model

    Adds model.save and model.load_model to the CollocationSolverND class, ref #3.

    Will be released in the next stable version.

    Currently this can be done through the Keras integration by running model.u_model.save("path/to/file"). This change will allow a direct save by calling model.save() on the CollocationSolverND class, and likewise load_model().

    The docs will be updated to reflect this change.
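
    Until then, a sketch of the workaround described above (model is assumed to be a compiled CollocationSolverND; tf.keras.models.load_model is the standard Keras loader):

    import tensorflow as tf

    # save the underlying Keras network wrapped by the solver
    model.u_model.save("path/to/file")

    # later: restore the network and reattach it to a solver instance
    model.u_model = tf.keras.models.load_model("path/to/file")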

    opened by levimcclenny 0
  • 2D Burgers Equation

    Hello @levimcclenny and thanks for recommending this library!

    I have modified the 1D Burgers example to 2D, but I did not get good comparison results. Any suggestions?

    import math
    import scipy.io
    import numpy as np
    import tensorflow as tf
    import tensordiffeq as tdq
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND
    
    Domain = DomainND(["x", "y", "t"], time_var='t')
    
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("y", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    
    N_f = 10000
    Domain.generate_collocation_points(N_f)
    
    
    def func_ic(x, y):
        # initial condition: product of sine modes p and q
        p = 2
        q = 1
        return np.sin(p * math.pi * x) * np.sin(q * math.pi * y)
        
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
    
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    
    
    def f_model(u_model, x, y, t):
        # strong-form residual of the 2D viscous Burgers equation
        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        f_u = u_t + u * (u_x + u_y) - (0.01 / tf.constant(math.pi)) * (u_xx + u_yy)
        return f_u
    
    
    layer_sizes = [3, 20, 20, 20, 20, 20, 20, 20, 20, 1]
    
    model = CollocationSolverND()
    model.compile(layer_sizes, f_model, Domain, BCs)
    
    # to reproduce results from Raissi and the SA-PINNs paper, train for 10k newton and 10k adam
    model.fit(tf_iter=10000, newton_iter=10000)
    
    model.save("burger2D_Training_Model")
    #model.load("burger2D_Training_Model")
    
    #######################################################
    #################### PLOTTING #########################
    #######################################################
    
    data = np.load('py-pde_2D_burger_data.npz')
    
    Exact = data['u_output']
    Exact_u = np.real(Exact)
    
    x = Domain.domaindict[0]['xlinspace']
    y = Domain.domaindict[1]['ylinspace']
    t = Domain.domaindict[2]["tlinspace"]
    
    X, Y, T = np.meshgrid(x, y, t)
    
    X_star = np.hstack((X.flatten()[:, None], Y.flatten()[:, None], T.flatten()[:, None]))
    u_star = Exact_u.T.flatten()[:, None]
    
    u_pred, f_u_pred = model.predict(X_star)
    
    error_u = tdq.helpers.find_L2_error(u_pred, u_star)
    print('Error u: %e' % (error_u))
    
    lb = np.array([-1.0, -1.0, 0.0])
    ub = np.array([1.0, 1.0, 1])
    
    tdq.plotting.plot_solution_domain2D(model, [x, y, t], ub=ub, lb=lb, Exact_u=Exact_u.T)
    
    
    (Three screenshots of the results attached.)
    opened by engsbk 3
  • 2D Wave Equation

    Thank you for the great contribution!

    I'm trying to extend the 1D example problems to 2D, but I want to make sure my changes are in the correct place:

    1. Dimension variables. I changed them like so:

    Domain = DomainND(["x", "y", "t"], time_var='t')

    Domain.add("x", [0.0, 5.0], 100)
    Domain.add("y", [0.0, 5.0], 100)
    Domain.add("t", [0.0, 5.0], 100)

    2. My IC is zero, but for the BCs I'm not sure how to define the left and right borders. Please let me know if my implementation is correct:
    
    def func_ic(x,y):
        return 0
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
            
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    

    All of my BCs and ICs are zero, and my equation has a time-dependent (forcing) source term, as follows:

    
    def f_model(u_model, x, y, t):
        # wave speed and source parameters
        c = tf.constant(1, dtype=tf.float32)
        Amp = tf.constant(2, dtype=tf.float32)
        freq = tf.constant(1, dtype=tf.float32)
        sigma = tf.constant(0.2, dtype=tf.float32)

        source_x = tf.constant(0.5, dtype=tf.float32)
        source_y = tf.constant(2.5, dtype=tf.float32)

        # Gaussian spatial profile centered at the source location
        GP = Amp * tf.exp(-0.5 * (((x - source_x) / sigma)**2 + ((y - source_y) / sigma)**2))

        # time-harmonic forcing
        S = GP * tf.sin(2 * tf.constant(math.pi) * freq * t)
        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        u_tt = tf.gradients(u_t, t)

        # residual of the 2D wave equation with source term S
        f_u = u_xx + u_yy - (1 / c**2) * u_tt + S

        return f_u
    

    Please advise.

    Looking forward to your reply!

    opened by engsbk 13
  • Reproducibility

    Dear @levimcclenny,

    Have you considered adapting TensorDiffEq to be deterministic? In the way the code is implemented, there are two sources of randomness:

    • The function Domain.generate_collocation_points uses random number generation
    • The TensorFlow training procedure (weight initialization and the possible use of random batches)

    Both sources of randomness can be addressed without much effort. For the first, we can define a random state that is passed to Domain.generate_collocation_points. For the second, we can use the implementation provided in Framework Determinism. I have used the procedures suggested by this code, and the results of TensorFlow are always reproducible (CPU or GPU, serial or distributed).
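
    For reference, a seeding routine along those lines might look like the following sketch (the helper name and default seed are assumptions; TF_DETERMINISTIC_OPS follows the Framework Determinism guidance for TF 2.x):

    import os
    import random
    import numpy as np
    import tensorflow as tf

    def set_global_determinism(seed=42):
        # seed every RNG the training path touches
        os.environ["PYTHONHASHSEED"] = str(seed)
        random.seed(seed)
        np.random.seed(seed)      # e.g. collocation-point sampling
        tf.random.set_seed(seed)  # weight initialization and shuffling
        # request deterministic kernels (TF >= 2.1)
        os.environ["TF_DETERMINISTIC_OPS"] = "1"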

    If you want, I can implement these two features.

    Best Regards

    opened by emiliocoutinho 3
Releases: v0.2.0

Owner: tensordiffeq (Scalable PINN solvers for PDE Inference and Discovery)