Physics-Informed Neural Network (PINN) and Deep BSDE solvers for differential equations, for Scientific Machine Learning (SciML) accelerated simulation

Overview

NeuralPDE


NeuralPDE.jl is a solver package consisting of neural network solvers for partial differential equations, using scientific machine learning (SciML) techniques such as physics-informed neural networks (PINNs) and deep BSDE solvers. The package uses deep neural networks and neural stochastic differential equations to solve high-dimensional PDEs at greatly reduced cost and with far greater generality than classical methods.
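At its core, a PINN turns solving a differential equation into minimizing a loss built from the equation's residual and its boundary/initial conditions; NeuralPDE generates such losses automatically from a symbolic problem description. As a minimal hand-rolled sketch of the idea (a toy, not the package's internals), fitting u'(x) = cos(2πx) with u(0) = 0:

# Toy hand-rolled PINN: train a small network so that its derivative matches
# the ODE right-hand side at sample points and the boundary condition holds.
using Flux, Statistics

NN = Chain(Dense(1, 16, tanh), Dense(16, 1))
u(x) = NN([x])[1]
du(x) = (u(x + 1f-4) - u(x - 1f-4)) / 2f-4  # finite-difference derivative of the network

xs = Float32.(0:0.05:1)
# PDE residual at the sample points plus the boundary condition at x = 0
loss() = mean(abs2, du.(xs) .- cos.(2f0π .* xs)) + abs2(u(0f0))

opt = ADAM(0.01)
for _ in 1:2000
    Flux.train!(loss, Flux.params(NN), [()], opt)
end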

Installation

Assuming that you already have Julia correctly installed, it suffices to install NeuralPDE.jl in the standard way, that is, by typing ] add NeuralPDE. Note: to exit the Pkg REPL-mode, just press Backspace or Ctrl + C.
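Equivalently, from within code rather than the Pkg REPL:

using Pkg
Pkg.add("NeuralPDE")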

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for unreleased features.

Features

  • Physics-Informed Neural Networks for automated PDE solving.
  • Forward-Backward Stochastic Differential Equation (FBSDE) methods for parabolic PDEs.
  • Deep-learning-based solvers for optimal stopping time and Kolmogorov backward equations.

Example: Solving 2D Poisson Equation via Physics-Informed Neural Networks

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, DiffEqFlux
using Quadrature, Cubature
import ModelingToolkit: Interval, infimum, supremum

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)

# Boundary conditions
bcs = [u(0,y) ~ 0.0, u(1,y) ~ -sin(pi*1)*sin(pi*y),
       u(x,0) ~ 0.0, u(x,1) ~ -sin(pi*x)*sin(pi*1)]
# Space and time domains
domains = [x ∈ Interval(0.0,1.0),
           y ∈ Interval(0.0,1.0)]
# Discretization
dx = 0.1

# Neural network
dim = 2 # number of dimensions
chain = FastChain(FastDense(dim,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,1))

# Initial parameters of the neural network
initθ = Float64.(DiffEqFlux.initial_params(chain))

discretization = PhysicsInformedNN(chain, QuadratureTraining(), init_params = initθ)

@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x, y)])
prob = discretize(pde_system,discretization)

cb = function (p,l)
    println("Current loss is: $l")
    return false
end

res = GalacticOptim.solve(prob, ADAM(0.1); cb = cb, maxiters=4000)
prob = remake(prob,u0=res.minimizer)
res = GalacticOptim.solve(prob, ADAM(0.01); cb = cb, maxiters=2000)
phi = discretization.phi

And some analysis:

xs,ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)

u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
diff_u = abs.(u_predict .- u_real)

using Plots
p1 = plot(xs, ys, u_real, linetype=:contourf, title="analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title="predict");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title="error");
plot(p1,p2,p3)

(Image: contour plots of the analytic solution, the PINN prediction, and the pointwise error.)

Example: Solving a 100-Dimensional Hamilton-Jacobi-Bellman Equation

using NeuralPDE
using Flux
using DifferentialEquations
using LinearAlgebra
d = 100 # number of dimensions
X0 = fill(0.0f0, d) # initial value of stochastic control process
tspan = (0.0f0, 1.0f0)
λ = 1.0f0

g(X) = log(0.5f0 + 0.5f0 * sum(X.^2))
f(X,u,σᵀ∇u,p,t) = -λ * sum(σᵀ∇u.^2)
μ_f(X,p,t) = zero(X)  # Vector d x 1
σ_f(X,p,t) = Diagonal(sqrt(2.0f0) * ones(Float32, d)) # Matrix d x d
prob = TerminalPDEProblem(g, f, μ_f, σ_f, X0, tspan)
hls = 10 + d # hidden layer size
opt = Flux.ADAM(0.01)  # optimizer
# sub-neural network approximating solutions at the desired point
u0 = Flux.Chain(Dense(d, hls, relu),
                Dense(hls, hls, relu),
                Dense(hls, 1))
# sub-neural network approximating the spatial gradients at each time point
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, d))
pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)
@time ans = solve(prob, pdealg, verbose=true, maxiters=100, trajectories=100,
                            alg=EM(), dt=1.2, pabstol=1f-2)
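For this HJB equation the solution has a closed-form Feynman–Kac/Cole–Hopf representation, u(t, x) = -(1/λ) log E[exp(-λ g(x + √2 W_{T-t}))], so the learned value can be sanity-checked by Monte Carlo. A sketch, assuming ans above is the solver's scalar estimate of u(t0, X0); the sample count MC is an arbitrary choice:

using Statistics
T = tspan[2]
MC = 10^5  # number of Monte Carlo samples (arbitrary)
W() = randn(Float32, d)
# u(t, x) = -(1/λ) * log(E[exp(-λ g(x + √2 W_{T-t}))])
u_analytical(x, t) = -(1 / λ) * log(mean(exp(-λ * g(x .+ sqrt(2f0) * sqrt(T - t) .* W())) for _ in 1:MC))
analytical_ans = u_analytical(X0, tspan[1])
println("Monte Carlo reference: ", analytical_ans, " vs. deep BSDE estimate: ", ans)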

Citation

If you use NeuralPDE.jl in your research, please cite this paper:

@article{zubov2021neuralpde,
  title={NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations},
  author={Zubov, Kirill and McCarthy, Zoe and Ma, Yingbo and Calisto, Francesco and Pagliarino, Valerio and Azeglio, Simone and Bottero, Luca and Luj{\'a}n, Emmanuel and Sulzer, Valentin and Bharambe, Ashutosh and others},
  journal={arXiv preprint arXiv:2107.09443},
  year={2021}
}
Comments
  • TagBot trigger issue

    TagBot trigger issue

    This issue is used to trigger TagBot; feel free to unsubscribe.

    If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.

    If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours, please be patient!

    opened by JuliaTagBot 48
  • Neural adapter test is broken

    Neural adapter test is broken

    It seems the 2D Poisson equation with Neural adapter test is broken. I tested it on master and it failed. It appears to be related to ChainRulesCore.

    ERROR: LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
    Closest candidates are:
      *(::Any, ::ChainRulesCore.Tangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:151
      *(::Any, ::ChainRulesCore.AbstractThunk) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:125
      *(::Any, ::ChainRulesCore.ZeroTangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:104
    
    using Flux
    using DiffEqFlux
    using ModelingToolkit
    using Test, NeuralPDE
    using GalacticOptim
    using SciMLBase
    import ModelingToolkit: Interval
    
    ## Example, 2D Poisson equation with Neural adapter
    println("Example, 2D Poisson equation with Neural adapter")
    @parameters x y
    @variables u(..)
    Dxx = Differential(x)^2
    Dyy = Differential(y)^2
    
    # 2D PDE
    eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)
    
    # Initial and boundary conditions
    bcs = [u(0,y) ~ 0.0, u(1,y) ~ -sin(pi*1)*sin(pi*y),
           u(x,0) ~ 0.0, u(x,1) ~ -sin(pi*x)*sin(pi*1)]
    # Space and time domains
    domains = [x ∈ Interval(0.0,1.0),
               y ∈ Interval(0.0,1.0)]
    quadrature_strategy = NeuralPDE.QuadratureTraining(reltol=1e-2,abstol=1e-2,
                                                       maxiters =50, batch=100)
    inner = 8
    af = Flux.tanh
    chain1 = Chain(Dense(2,inner,af),
                   Dense(inner,inner,af),
                   Dense(inner,1))
    initθ = Float64.(DiffEqFlux.initial_params(chain1))
    discretization = NeuralPDE.PhysicsInformedNN(chain1,
                                                 quadrature_strategy;
                                                 init_params = initθ)
    
    @named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x,y)])
    prob = NeuralPDE.discretize(pde_system,discretization)
    sym_prob = NeuralPDE.symbolic_discretize(pde_system,discretization)
    res = GalacticOptim.solve(prob, BFGS();  maxiters=2000) #  LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
    
    opened by killah-t-cell 31
  • Update to MTK5: some problem with the new Differential syntax

    Update to MTK5: some problem with the new Differential syntax

    Trying to run PINN fp doc example, the MTK5 update of

    @parameters x
    @variables p(..)
    Dx = Differential(x)
    Dxx = Differential(x)^2

    yields the following error:

    MethodError: no method matching ^(::Differential, ::Int64)
    Closest candidates are:
      ^(!Matched::Float32, ::Integer) at math.jl:907
      ^(!Matched::Irrational{:ℯ}, ::Integer) at mathconstants.jl:91
      ^(!Matched::Irrational{:ℯ}, ::Number) at mathconstants.jl:91
      ...

    Stacktrace:
      [1] macro expansion at .\none:0 [inlined]
      [2] literal_pow(::typeof(^), ::Differential, ::Val{2}) at .\none:0
      [3] top-level scope at In[3]:4
      [4] include_string(::Function, ::Module, ::String, ::String) at .\loading.jl:1091

    opened by finmod 29
  • Issue with 1D wave equation example

    Issue with 1D wave equation example

    Issue

    Looking at the 1D wave equation example, I am not sure the presented solution is correct. I discussed this in a Julia Discourse thread. To briefly summarize: the solution in that NeuralPDE.jl example isn't what I would expect to see, and it doesn't match the results from a Matlab wave solver when I try to replicate them.

    Then, when I converted the NeuralPDE example to a "purely" ModelingToolkit version, I got a solution that looked like it came from a diffusion problem. I had a similar problem when I wrote a custom wave solver using a spectral method, which turned out to be caused by improperly defined initial conditions (i.e. du(0,x)/dx = 0 instead of the derivative of u(0,x)). So I exchanged Dt(u(0,x)) ~ 0. for Dx(u(0,x)) ~ 1-2x in the bcs, without any effect.
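    For reference (my own back-of-the-envelope check, not from the original thread): with c = 1, homogeneous Dirichlet boundaries, u(0,x) = x(1-x), and u_t(0,x) = 0, separation of variables gives a standing wave, so a correct solver should show oscillation rather than decay:

    # Analytic series solution of u_tt = u_xx, u(t,0) = u(t,1) = 0,
    # u(0,x) = x*(1-x), u_t(0,x) = 0:
    # u(t,x) = Σ_{k odd} 8/(k^3 π^3) * sin(kπx) * cos(kπt)
    u_series(t, x; K = 99) = sum(8 / (k^3 * pi^3) * sin(k * pi * x) * cos(k * pi * t) for k in 1:2:K)
    u_series(0.0, 0.5)  # ≈ 0.25, matching x*(1-x) at x = 0.5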

    Matlab Code

    clearvars;
    
    % =========================================================================
    % SIMULATION
    % =========================================================================
    
    % create the computational grid
    Lx = 1;
    dx = 0.1;                 % grid point spacing in the x direction [m]    
    Nx = round(Lx/dx); % number of grid points in the x (row) direction
    
    kgrid = kWaveGrid(Nx, dx);
    
    % define the properties of the propagation medium
    medium.sound_speed = 1;  % [m/s] 
    
    kgrid.makeTime(medium.sound_speed, [], 5);
    
    % create initial pressure distribution
    x = 0:dx:Lx-dx;
    p0 = x.*(1-x);
    source.p0=p0';
    
    % define a sensor
    sensor.mask = zeros(1,Nx);
    
    % run the simulation
    args = {'PMLInside', false, 'RecordMovie', true};
    sensor_data = kspaceFirstOrder1D(kgrid, medium, source, sensor,args{:});
    

    Matlab Results

    Keep in mind that the solution below has a larger spatial domain than the NeuralPDE example, because this wave solver uses a spectral method and needs a perfectly matched layer (PML) to avoid "wrap-around" effects.

    (Image: Matlab k-Wave simulation results.)

    Converted NeuralPDE Example

    using Plots, DifferentialEquations, ModelingToolkit, DiffEqOperators
    
    @parameters t, x
    @variables u(..)
    Dxx = Differential(x)^2
    Dtt = Differential(t)^2
    Dt = Differential(t)
    Dx = Differential(x)
    
    #2D PDE
    C=1
    eq  = Dtt(u(t,x)) ~ C^2*Dxx(u(t,x))
    
    # Initial and boundary conditions
    bcs = [u(t,0) ~ 0.,# for all t > 0
           u(t,1) ~ 0.,# for all t > 0
           u(0,x) ~ x*(1. - x), #for all 0 < x < 1
           Dt(u(0,x)) ~ 0.0, #for all  0 < x < 1
    ]
    
    # Space and time domains
    domains = [t ∈ (0.0,1.0),
               x ∈ (0.0,1.0)]
    # Method of lines discretization
    dx = 0.1
    order = 2
    discretization = MOLFiniteDifference([x=>dx],t)
    
    # PDE system
    pdesys = PDESystem(eq,bcs,domains,[t,x],[u(t,x)])
    
    
    # Convert the PDE problem into an ODE problem
    prob = discretize(pdesys,discretization)
    
    # Solve ODE problem
    sol = solve(prob)
    
    # Plot results
    anim = @animate for i ∈ 1:length(sol.t)
        plot(sol.u[i], label = "wave", ylims = [-0.25, 0.25])
    end every 5
    
    gif(anim, "1Dwave.gif", fps = 10)
    

    Converted NeuralPDE Results

    (Image: animated solution of the converted example, which looks like diffusion.)

    good first issue 
    opened by alexpattyn 28
  • System of PDEs with CUDA?

    System of PDEs with CUDA?

    I tried to adapt the https://neuralpde.sciml.ai/dev/pinn/2D/ GPU tutorial to a system of PDEs and unfortunately failed. I need to turn the initθ into a CuArray but I get a warning that Scalar indexing is disallowed. What is the performant/correct way to do the mapping I am doing here with CUDA?

    using Flux, CUDA, DiffEqFlux
    chain = [FastChain(FastDense(3, 16, Flux.σ), FastDense(16,16,Flux.σ), FastDense(16, 1)),
             FastChain(FastDense(2, 16, Flux.σ), FastDense(16,16,Flux.σ), FastDense(16, 1))]
    
    initθ = map(c -> CuArray(Float64.(c)), DiffEqFlux.initial_params.(chain))
    
    opened by killah-t-cell 25
  • ZDM / GB heterogeneous input

    ZDM / GB heterogeneous input

    I started working on @zoemcc's PR last weekend because 1. I really want us to support heterogeneous inputs and 2. I thought it would be a good learning opportunity given that Zoe has done such a great job thinking about the architecture of this. My goal was to bring this PR up to date with the code base and get rid of remaining bugs so we could merge.

    At this point this is almost done and you can check out Zoe's PR for an explanation of how this works https://github.com/SciML/NeuralPDE.jl/pull/298. This is mostly her work and I just opened a new branch because it was more pleasant to merge it bit-by-bit. I hope that is ok!

    The PR is up to date with master, and I fixed a few bugs. The main remaining bug is that I still get a LoadError: DimensionMismatch("A has dimensions (3,1) but B has dimensions (2,30)") when I try to run a heterogeneous system. I am still not sure where this bug comes from, but it probably has to do with the fact that you need an array of chains with different input sizes to get heterogeneous systems to work, and we didn't handle that correctly somewhere (probably in build_symbolic_function or discretize).

    opened by killah-t-cell 25
  • Support Inf Integrals (round 2)

    Support Inf Integrals (round 2)

    I gave this issue another shot with fresh eyes and made some real progress. The symbolic transformation now looks right.

    For this system

    @parameters v x t
    @variables f(..)
    Iv = Integral((t,x) in DomainSets.ProductDomain(ClosedInterval(-Inf, Inf),ClosedInterval(-Inf, Inf)))
    Dx = Differential(x)
    eqs_ = Iv(f(t, x, v)*x) + Dx(f(t,x,v)) ~ π
    
    bcs = [f(0,x,v) ~ 2]
    
    domains = [t ∈ Interval(0.0, 1.0),
               x ∈ Interval(0.0, 1.0),
               v ∈ Interval(0.0, 1.0)]
    
    # Neural Network
    chain = [FastChain(FastDense(3, 16, Flux.σ), FastDense(16,16,Flux.σ), FastDense(16, 1))]
    initθ = map(c -> Float64.(c), DiffEqFlux.initial_params.(chain))
    
    discretization = NeuralPDE.PhysicsInformedNN(chain, QuadratureTraining(), init_params= initθ)
    @named pde_system = PDESystem(eqs_, bcs, domains, [t,x,v], [f(t,x,v)])
    prob = SciMLBase.symbolic_discretize(pde_system, discretization)
    prob = SciMLBase.discretize(pde_system, discretization)
    
    cb = function (p,l)
        println("Current loss is: $l")
        return false
    end
    
    res = GalacticOptim.solve(prob, BFGS(); cb=cb, maxiters=100)
    

    The symbolic transformation is

    (Expr[:((cord, var"##θ#333", phi, derivative, integral, u, p)->begin
              begin
                  (var"##θ#3331",) = (var"##θ#333"[1:353],)
                  (phi1,) = (phi[1],)
                  let (t, x, v) = (cord[[1], :], cord[[2], :], cord[[3], :])
                      begin
                          cord1 = vcat(t, x, v)
                      end
                      (+).(integral(u, cord1, phi, [1, 2], RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#333"), :phi, :derivative, :integral, :u, :p), Main.NeuralPDE.var"#_RGF_ModTag", Main.NeuralPDE.var"#_RGF_ModTag", (0x1892a7ea, 0xc1142249, 0x44da67b6, 0x4db8ecb4, 0x0fa6b1a4)}(quote
        begin
            (var"##θ#3331",) = (var"##θ#333"[1:353],)
            (phi1,) = (phi[1],)
            let (t, x, v) = (fill(t ./ (1 .- t .^ 2), size(cord[[1], :])), fill(x ./ (1 .- x .^ 2), size(cord[[1], :])), cord[[1], :])
                begin
                    cord1 = vcat(t, x, v)
                end
                (*).((*).(x, u(cord1, var"##θ#3331", phi1)), (/).((+).(1, (^).(t, 2)), (^).((-).(1, (^).(t, 2)), 2)), (/).((+).(1, (^).(x, 2)), (^).((-).(1, (^).(x, 2)), 2)))
            end
        end
    end), Any[-1.0, -1.0], Any[1.0, 1.0], var"##θ#333"), derivative(phi1, u, cord1, [[0.0, 6.0554544523933395e-6, 0.0]], 1, var"##θ#3331")) .- π
                  end
              end
          end)], Expr[:((cord, var"##θ#333", phi, derivative, integral, u, p)->begin
              begin
                  (var"##θ#3331",) = (var"##θ#333"[1:353],)
                  (phi1,) = (phi[1],)
                  let (t, x, v) = (fill(0, size(cord[[1], :])), cord[[1], :], cord[[2], :])
                      begin
                          cord1 = vcat(t, x, v)
                      end
                      u(cord1, var"##θ#3331", phi1) .- 2
                  end
              end
          end)])
    

    I am opening this as a draft PR, because I am still getting a LoadError: TypeError: non-boolean (Symbolics.Num) used in boolean context in solve.
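    As an aside, my reading of the generated code above (an observation of mine, not stated in the PR): the infinite bounds appear to be handled by the change of variables x = t/(1 - t^2), which maps (-1, 1) onto (-Inf, Inf); the (1 + t^2)/(1 - t^2)^2 factors in the integrand are exactly its Jacobian:

    # Substitution read off the generated integrand above:
    φ(t) = t / (1 - t^2)             # maps (-1, 1) -> (-Inf, Inf)
    dφ(t) = (1 + t^2) / (1 - t^2)^2  # Jacobian dφ/dt, matching the generated factors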

    opened by killah-t-cell 24
  • Error in latest update_doc examples

    Error in latest update_doc examples

    In Example 1 (2D Poisson) and many other examples, there is a bug in the discretization syntax:

    dx = 0.05
    discretization = PhysicsInformedNN(chain, GridTraining(dx))

    Returns:

    MethodError: no method matching GridTraining()
    Closest candidates are:
      GridTraining(!Matched::Any) at C:\Users\Denis.julia\dev\NeuralPDE.jl-master\src\pinns_pde_solve.jl:64

    Stacktrace:
      [1] PhysicsInformedNN(::Function, ::GridTraining) at C:\Users\Denis.julia\dev\NeuralPDE.jl-master\src\pinns_pde_solve.jl:28
      [2] top-level scope at In[13]:3
      [3] include_string(::Function, ::Module, ::String, ::String) at .\loading.jl:1091

    opened by finmod 24
  • Potential gradient issues with Flux chains when changing parameter type

    Potential gradient issues with Flux chains when changing parameter type

    MWE:

    using DiffEqFlux, Flux, Zygote, NeuralPDE, ModelingToolkit, DomainSets, Optimization, OptimizationFlux, Test
    
    @parameters x y
    @variables u(..)
    Dxx = Differential(x)^2
    Dyy = Differential(y)^2
    
    # 2D PDE
    eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)
    
    # Initial and boundary conditions
    bcs = [u(0,y) ~ 0.0, u(1,y) ~ -sin(pi*1)*sin(pi*y),
           u(x,0) ~ 0.0, u(x,1) ~ -sin(pi*x)*sin(pi*1)]
    # Space and time domains
    domains = [x ∈ Interval(0.0,1.0),
               y ∈ Interval(0.0,1.0)]
    
    @named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x, y)])
    
    fastchain = FastChain(FastDense(2,12,Flux.σ),FastDense(12,12,Flux.σ),FastDense(12,1))
    fluxchain = Chain(Dense(2,12,Flux.σ),Dense(12,12,Flux.σ),Dense(12,1))
    initθ = Float64.(DiffEqFlux.initial_params(fastchain))
    grid_strategy = NeuralPDE.GridTraining(0.1)
    
    p,re = Flux.destructure(fluxchain)
    
    discretization1 = NeuralPDE.PhysicsInformedNN(fastchain,
                                                 grid_strategy;
                                                 init_params = initθ)
    
    discretization2 = NeuralPDE.PhysicsInformedNN(fluxchain,
                                                 grid_strategy;
                                                 init_params = initθ)
    
    
    prob1 = NeuralPDE.discretize(pde_system,discretization1)
    prob2 = NeuralPDE.discretize(pde_system,discretization2)
    sym_prob = NeuralPDE.symbolic_discretize(pde_system,discretization1)
    
    Zygote.gradient((x)->prob1.f(x,nothing),initθ)
    Zygote.gradient((x)->prob2.f(x,nothing),initθ) # Very very different???
    
    function callback(p,l)
        @show l
        false
    end
    res = Optimization.solve(prob1, ADAM(0.1); callback=callback,maxiters=1000)
    phi = discretization1.phi
    
    xs,ys = [infimum(d.domain):0.01:supremum(d.domain) for d in domains]
    analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)
    
    u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
    u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
    diff_u = abs.(u_predict .- u_real)
    
    @show maximum(abs2,u_predict - u_real)
    @test u_predict ≈ u_real atol = 2.0
    
    res = Optimization.solve(prob2, ADAM(0.1); callback=callback,maxiters=1000)
    phi = discretization2.phi
    
    xs,ys = [infimum(d.domain):0.01:supremum(d.domain) for d in domains]
    analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)
    
    u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
    u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
    diff_u = abs.(u_predict .- u_real)
    
    @show maximum(abs2,u_predict - u_real)
    @test u_predict ≈ u_real atol = 2.0
    

    See how the fluxchain version fails and its gradient is off.

    bug 
    opened by ChrisRackauckas 21
  • Tutorial not working

    Tutorial not working

    I tried the 2D Poisson's equation tutorial and got the following error when executing the solver:

    julia> res = GalacticOptim.solve(prob, BFGS(), progress = false; cb = cb, maxiters=1000)

    ERROR: MethodError: no method matching Optim.Options(; extended_trace=true, callback=GalacticOptim.var"#_cb#25"{var"#1#2",BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat},Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}}(var"#1#2"(), BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}(LineSearches.InitialStatic{Float64} alpha: Float64 1.0 scaled: Bool false , LineSearches.HagerZhang{Float64,Base.RefValue{Bool}} delta: Float64 0.1 sigma: Float64 0.9 alphamax: Float64 Inf rho: Float64 5.0 epsilon: Float64 1.0e-6 gamma: Float64 0.66 linesearchmax: Int64 50 psi3: Float64 0.1 display: Int64 0 mayterminate: Base.RefValue{Bool} , nothing, nothing, Flat()), Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}((GalacticOptim.NullData(),)), Core.Box(2), Core.Box(GalacticOptim.NullData()), Core.Box(#undef)), iterations=1000, progress=false)
    Closest candidates are:
      Optim.Options(; x_tol, f_tol, g_tol, x_abstol, x_reltol, f_abstol, f_reltol, g_abstol, g_reltol, outer_x_tol, outer_f_tol, outer_g_tol, outer_x_abstol, outer_x_reltol, outer_f_abstol, outer_f_reltol, outer_g_abstol, outer_g_reltol, f_calls_limit, g_calls_limit, h_calls_limit, allow_f_increases, allow_outer_f_increases, successive_f_tol, iterations, outer_iterations, store_trace, trace_simplex, show_trace, extended_trace, show_every, callback, time_limit) at /home/ah/.julia/packages/Optim/auGGa/src/types.jl:73 got unsupported keyword argument "progress"
      Optim.Options(::T, ::T, ::T, ::T, ::T, ::T, ::T, ::T, ::T, ::T, ::T, ::T, ::Int64, ::Int64, ::Int64, ::Bool, ::Bool, ::Int64, ::Int64, ::Int64, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::TCallback, ::Float64) where {T, TCallback} at /home/ah/.julia/packages/Optim/auGGa/src/types.jl:44 got unsupported keyword arguments "extended_trace", "callback", "iterations", "progress"
    Stacktrace:
      [1] kwerr(::NamedTuple{(:extended_trace, :callback, :iterations, :progress),Tuple{Bool,GalacticOptim.var"#_cb#25"{var"#1#2",BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat},Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}},Int64,Bool}}, ::Type{T} where T) at ./error.jl:157
      [2] __solve(::OptimizationProblem{true,OptimizationFunction{true,GalacticOptim.AutoZygote,NeuralPDE.var"#loss_function#191"{NeuralPDE.var"#177#183"{Int64,NeuralPDE.var"#175#181"{NeuralPDE.var"#168#170"{FastChain{Tuple{FastDense{typeof(σ),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(σ),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(identity),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}}}}},NeuralPDE.var"#172#173"{Float32},NeuralPDE.var"#inner_loss#179"}},NeuralPDE.var"#177#183"{Int64,NeuralPDE.var"#175#181"{NeuralPDE.var"#168#170"{FastChain{Tuple{FastDense{typeof(σ),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(σ),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(identity),DiffEqFlux.var"#initial_params#73"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}}}}},NeuralPDE.var"#172#173"{Float32},NeuralPDE.var"#inner_loss#179"}}},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Float32,1},DiffEqBase.NullParameters,Nothing,Nothing,Nothing,Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}}, ::BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}, ::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}; cb::Function, maxiters::Int64, kwargs::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol},NamedTuple{(:progress,),Tuple{Bool}}}) at /home/ah/.julia/packages/GalacticOptim/TfGcK/src/solve.jl:208
      [3] #solve#1 at /home/ah/.julia/packages/GalacticOptim/TfGcK/src/solve.jl:12 [inlined]
      [4] top-level scope at REPL[16]:1

    Should the tutorial code be updated, or is there another problem?

    opened by ahenkes1 21
  • Test Errors?

    Test Errors?

    I ran: https://github.com/JuliaDiffEq/NeuralNetDiffEq.jl/blob/master/test/NNPDENS_tests.jl

    For "Black-Scholes-Barenblatt equation"

    ans = solve(prob, pdealg, verbose=true, maxiters=250, trajectories=m,
                alg=EM(), dt=dt, pabstol = 1f-6)
    

    It says:

    MethodError: no method matching Float32(::Tracker.TrackedReal{Float32})
    Closest candidates are:
      Float32(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
      Float32(::T) where T<:Number at boot.jl:718
      Float32(!Matched::Int8) at float.jl:60

    "Nonlinear Black-Scholes Equation with Default Risk"

    @time ans = solve(prob, pdealg, verbose=true, maxiters=200, trajectories=m,
                                alg=EM(), dt=dt, pabstol = 1f-6)
    

    Says: MethodError: vcat(::TrackedArray{…,Array{Float32,1}}, ::Array{Tracker.TrackedReal{Float32},1}) is ambiguous. Candidates:

    opened by azev77 19
  • Allowing a function to be called multiple times with different inputs

    Allowing a function to be called multiple times with different inputs

    Currently, something like u(x) - u(0) ~ sin(x) gets parsed as u(x) - u(x) ~ sin(x) when generating the loss function, because all instances of u(anything) are assumed to have the same input. I've made changes to _transform_expression and _dot_ that generate the correct loss function, and I made a minor edit to one 1D test case, changing u(x) ~ x*cos(...) to u(x) - u(0) ~ x*cos(...), to test these changes and to mimic someone trying to train a network that they knew passed through (0,0) and so would represent it by g(x) = u(x) - u(0).
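    A minimal sketch of the equation form in question (a hypothetical reconstruction, not the PR's actual test):

    using ModelingToolkit

    @parameters x
    @variables u(..)
    # u(0) must stay "u evaluated at 0" and not be re-parsed as u(x):
    eq = u(x) - u(0) ~ sin(x)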

    opened by nicholaskl97 3
  • CompatHelper: bump compat for QuasiMonteCarlo to 0.3 for package docs, (keep existing compat)

    CompatHelper: bump compat for QuasiMonteCarlo to 0.3 for package docs, (keep existing compat)

    This pull request changes the compat entry for the QuasiMonteCarlo package from 0.2 to 0.2, 0.3 for package docs. This keeps the compat entries for earlier versions.

    Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.

    opened by github-actions[bot] 0
  • CompatHelper: bump compat for QuasiMonteCarlo to 0.3, (keep existing compat)

    CompatHelper: bump compat for QuasiMonteCarlo to 0.3, (keep existing compat)

    This pull request changes the compat entry for the QuasiMonteCarlo package from 0.2.1 to 0.2.1, 0.3. This keeps the compat entries for earlier versions.

    Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.

    opened by github-actions[bot] 0
  • CompatHelper: bump compat for DomainSets to 0.6 for package docs, (keep existing compat)

    CompatHelper: bump compat for DomainSets to 0.6 for package docs, (keep existing compat)

    This pull request changes the compat entry for the DomainSets package from 0.5 to 0.5, 0.6 for package docs. This keeps the compat entries for earlier versions.

    Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.

    opened by github-actions[bot] 0
  • CompatHelper: bump compat for DomainSets to 0.6, (keep existing compat)

    CompatHelper: bump compat for DomainSets to 0.6, (keep existing compat)

    This pull request changes the compat entry for the DomainSets package from 0.5 to 0.5, 0.6. This keeps the compat entries for earlier versions.

    Note: I have not tested your package with this new compat entry. It is your responsibility to make sure that your package tests pass before you merge this pull request.

    opened by github-actions[bot] 0
  • Multi Dimensional PDEs

    Multi Dimensional PDEs

    Hi, I was about to solve a multi-dimensional stochastic PDE like the one that used to be in the documentation. No wonder you removed it: the old code does not work. What modifications do I have to make to get it working now? I dug the code out of an old commit

    using NeuralPDE
    using Flux
    using DifferentialEquations
    using LinearAlgebra
    d = 100 # number of dimensions
    X0 = fill(0.0f0, d) # initial value of stochastic control process
    tspan = (0.0f0, 1.0f0)
    λ = 1.0f0
    
    g(X) = log(0.5f0 + 0.5f0 * sum(X.^2))
    f(X,u,σᵀ∇u,p,t) = -λ * sum(σᵀ∇u.^2)
    μ_f(X,p,t) = zero(X)  # Vector d x 1
    σ_f(X,p,t) = Diagonal(sqrt(2.0f0) * ones(Float32, d)) # Matrix d x d
    prob = TerminalPDEProblem(g, f, μ_f, σ_f, X0, tspan)
    hls = 10 + d # hidden layer size
    opt = Flux.ADAM(0.01)  # optimizer
    # sub-neural network approximating solutions at the desired point
    u0 = Flux.Chain(Dense(d, hls, relu),
                    Dense(hls, hls, relu),
                    Dense(hls, 1))
    # sub-neural network approximating the spatial gradients at time point
    σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu),
                      Dense(hls, hls, relu),
                      Dense(hls, hls, relu),
                      Dense(hls, d))
    pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)
    @time ans = solve(prob, pdealg, verbose=true, maxiters=100, trajectories=100,
                                alg=EM(), dt=1.2, pabstol=1f-2)
    

    I would really appreciate it if you could help me. Thanks.

    opened by cdelv 1
Releases (v5.3.0)
  • v5.3.0(Sep 24, 2022)

    NeuralPDE v5.3.0

    Diff since v5.2.0

    Closed issues:

    • How to using additional_loss input 3D data. (#589)
    • How to use GPU to [chain1,chian2]? (#594)
    • ERROR: MethodError: no method matching NeuralPDE.Phi (#595)
    • Poisson Example not working on Julia 1.8 (#602)
    • The implementation of finite differences throws away its advantage over AD (#607)

    Merged pull requests:

    • unify docs (#590) (@ArnoStrouwen)
    • [skip ci] badges (#596) (@ArnoStrouwen)
    • doc cov (#597) (@ArnoStrouwen)
    • MassInstallAction: Install the Invalidations workflow on this repository (#601) (@devmotion)
    • Adapt states to GPU correctly (#604) (@YichengDWu)
    • Use different step sizes for different orders of derivatives (#608) (@YichengDWu)
  • v5.2.0(Aug 18, 2022)

    NeuralPDE v5.2.0

    Diff since v5.1.1

    Closed issues:

    • Upstreaming ComponentArrays overloads for Adapt required for GPU (#584)
    • How to use real data? (#588)

    Merged pull requests:

    • Remove ComponentArrays overloads (#587) (@MilkshakeForReal)
  • v5.1.1(Aug 17, 2022)

  • v5.1.0(Aug 15, 2022)

    NeuralPDE v5.1.0

    Diff since v5.0.0

    Closed issues:

    • Error in a simple free surface problem (#484)
    • Coupled boundary conditions for systems of PDE (#577)

    Merged pull requests:

    • fix typo (#570) (@MilkshakeForReal)
    • Fix another typo (#571) (@MilkshakeForReal)
    • fix grid strategy with PDE systems (#578) (@KirillZubov)
    • Lux on GPU (#583) (@MilkshakeForReal)
  • v5.0.0(Jul 5, 2022)

    NeuralPDE v5.0.0

    Diff since v4.11.0

    Closed issues:

    • Pricing options using NeuralNetDiffEq (#68)
    • third derivative (#129)
    • periodic boundary conditions (#134)
    • Example 7 of PINN: Kuramoto–Sivashinsky equation (#138)
    • Support automatic differentiation of the NN inside the loss function? (#150)
    • Automatic weighting between equations (#155)
    • support ConstrainedEquation for PINNs (#176)
    • Handle Models with Intermediate Expressions (#178)
    • Using a loop algorithm instead of recursive for calculating the derivative (#193)
    • Support TensorBoardLogger.jl or something like this. (#194)
    • Nonlinear second-order boundary value problems (#203)
    • Flux NNs shouldn't have to destructure/restructure (#214)
    • 2D inhomogeneous biharmonic equation (#218)
    • Upgrade to MTK5: errors and omissions in PINN examples (#248)
    • Full Kolmogorov PDE Solver documentation (#258)
    • DeepONets (#268)
    • Adaptive loss reweighting for PINNs (#276)
    • Imposing positive definiteness of the Hessian (#280)
    • IfElse.ifelse fail broadcasting (#299)
    • Cannot import ModelingToolkit: Interval, infimum, supremum (#319)
    • GPU Low-level api example (#342)
    • Reduce precompile time (#368)
    • retrieving PINN result (#376)
    • Support derivative for @register function (#398)
    • Test on a simple integrodifferential PDE (#406)
    • System of PDEs with CUDA? (#410)
    • Issue MethodError (#417)
    • Why the default derivative method is a numerical derivative? (#427)
    • Models with integrals over an infinite interval have loss = NaN or loss = Inf (#435)
    • How to run get_phi(chain) (#437)
    • IDE system fails with GPU (#443)
    • Kuramoto–Sivashinsky equation (#445)
    • Should we default to domain decomposition? (#451)
    • How many boundary points and internal points? (#453)
    • An thinking of increasing speed using Adaptive-Activation-Functions (#457)
    • Systems of PDE using GPU (#460)
    • How to use GPU? (#462)
    • How many network models when solve the system of eqs? (#467)
    • Wave equation tutorial does not work (#478)
    • AbstractAdaptiveLoss (#489)
    • Specify loss function directly for NeuralPDE? (#496)
    • Example on PDE System to be updated (#502)
    • error at precompiling of the version 4.10 (#522)
    • NNPDEHan details: u0 and BatchNorm (#525)
    • Example from documentation errors (#526)
    • Potential gradient issues with Flux chains when changing parameter type (#533)
    • Clean up the nomenclature and document PINNLossFunctions (#549)

    Merged pull requests:

    • link from docs to repo (#524) (@ranocha)
    • Update docs for Optimization.jl (#527) (@ChrisRackauckas)
    • Simplify sin(pi) = 0 in poisson example (#530) (@albheim)
    • enable a few doctests (#532) (@ChrisRackauckas)
    • Simplify a bunch of things (#534) (@ChrisRackauckas)
    • removed stuff related to DeepBSDE solvers, including doc (#535) (@vboussange)
    • Major dependency and test clean up (#538) (@ChrisRackauckas)
    • Overhaul NNODE (#539) (@ChrisRackauckas)
    • clean up some NNODE and training strategies docs (#541) (@ChrisRackauckas)
    • format SciML Style (#542) (@ChrisRackauckas)
    • Fix NNODE doc tutorial (#543) (@ChrisRackauckas)
    • Refactor internals so debugging is not sensitive to the internals (#544) (@ChrisRackauckas)
    • Simplify all of symbolic discretize into just symbolic discretize (#545) (@ChrisRackauckas)
    • Finish marking PINN examples as doctests (#546) (@ChrisRackauckas)
    • Refactor training strategy handling (#547) (@ChrisRackauckas)
    • Better RNG seed? (#548) (@ChrisRackauckas)
    • Generalize the adaptive loss interface (#553) (@ChrisRackauckas)
    • batching in NNODE (#554) (@ChrisRackauckas)
    • Specialize higher order derivatives (#558) (@ChrisRackauckas)
    • Restructure the documentation (#560) (@ChrisRackauckas)
    • Update Zygote.ignore to ChainRulesCore.ignore_derivatives (#562) (@ChrisRackauckas)
    • CompatHelper: add new compat entry for ChainRulesCore at version 1, (keep existing compat) (#564) (@github-actions[bot])
    • FastChain -> Lux (#565) (@ChrisRackauckas)
    • Use dependent variable naming in the indexing (#567) (@ChrisRackauckas)
    • improve a bunch of docstrings (#568) (@ChrisRackauckas)
    • CompatHelper: add new compat entry for ComponentArrays at version 0.12, (keep existing compat) (#569) (@github-actions[bot])
  • v4.11.0(Jun 4, 2022)

  • v4.10.1(Jun 2, 2022)

  • v4.10.0(May 31, 2022)

    NeuralPDE v4.10.0

    Diff since v4.9.0

    Merged pull requests:

    • Fix test imports (#519) (@ChrisRackauckas)
    • CompatHelper: bump compat for DocStringExtensions to 0.9, (keep existing compat) (#520) (@github-actions[bot])
    • Update for Quadrature -> Integral change (#521) (@ChrisRackauckas)
  • v4.9.0(May 22, 2022)

    NeuralPDE v4.9.0

    Diff since v4.8.0

    Closed issues:

    • Dummy variable and upper limit of integral should be swapped (#517)

    Merged pull requests:

    • CompatHelper: bump compat for ArrayInterface to 6, (keep existing compat) (#516) (@github-actions[bot])
    • Fix variables in 1-dimensional IDE example (#518) (@elisno)
  • v4.8.0(May 20, 2022)

  • v4.7.0(May 9, 2022)

    NeuralPDE v4.7.0

    Diff since v4.6.0

    Closed issues:

    • Register NeuralPDELogging (#503)
    • ERROR: syntax: extra token "can" after end of expression (#509)

    Merged pull requests:

    • Minor typos in API Documentation (#505) (@Saransh-cpp)
    • Test GalacticOptim 3 (#511) (@ChrisRackauckas)
  • v4.6.0(Mar 23, 2022)

  • v4.5.1(Mar 6, 2022)

  • v4.5.0(Mar 4, 2022)

    NeuralPDE v4.5.0

    Diff since v4.4.0

    Closed issues:

    • Fourier neural operators (#309)
    • Problematic Example: Solving a 100-Dimensional Hamilton-Jacobi-Bellman Equation (#474)
    • My errors in some examples (#483)

    Merged pull requests:

    • TerminalPDEProblem REPL error (#476) (@KirillZubov)
    • fix wave eq. example (#477) (@ranocha)
    • Update damped wave docs (#480) (@KirillZubov)
    • Fix debugging docs (#491) (@de-souza)
    • CompatHelper: bump compat for ArrayInterface to 5, (keep existing compat) (#492) (@github-actions[bot])
  • v4.4.0(Jan 10, 2022)

  • v4.3.0(Jan 5, 2022)

    NeuralPDE v4.3.0

    Diff since v4.2.0

    Closed issues:

    • what active fuction? (#465)

    Merged pull requests:

    • Support Inf Integrals (round 2) (#444) (@killah-t-cell)
    • CompatHelper: bump compat for SymbolicUtils to 0.19, (keep existing compat) (#464) (@github-actions[bot])
  • v4.2.0(Dec 23, 2021)

    NeuralPDE v4.2.0

    Diff since v4.1.0

    Closed issues:

    • How to use GPU (#449)
    • how to set bound [-8,8][-8,8]? (#450)
    • Error on recompiling (#454)
    • CUDA: NAN (#463)

    Merged pull requests:

    • Formula modification (#455) (@NeuralPDE)
    • CompatHelper: bump compat for SymbolicUtils to 0.19, (keep existing compat) (#458) (@github-actions[bot])
    • CompatHelper: bump compat for ModelingToolkit to 8, (keep existing compat) (#461) (@github-actions[bot])
  • v4.1.0(Dec 4, 2021)

    NeuralPDE v4.1.0

    Diff since v4.0.1

    Closed issues:

    • Neural adapter test is broken (#412)
    • First IDE test failed (#418)
    • NNPDENS and NNPDEHan tests are still failing (#420)
    • How to get the mathematical expression of Neural Network. (#439)
    • KeyError: key Differential(y) not found (#447)

    Merged pull requests:

    • Support for compound integrals (#409) (@killah-t-cell)
    • Fix first IDE test (#429) (@KirillZubov)
    • Improve integrating_depvars so it works more generally (#431) (@killah-t-cell)
    • CompatHelper: bump compat for ModelingToolkit to 7, (keep existing compat) (#433) (@github-actions[bot])
    • add parameterless_type_θ to get_phi doc (#438) (@killah-t-cell)
    • CI for LTS (#448) (@ChrisRackauckas)
  • v4.0.1(Oct 31, 2021)

    NeuralPDE v4.0.1

    Diff since v4.0.0

    Closed issues:

    • Issue with 1D wave equation example (#327)
    • Neural adapter (#333)
    • Specifying a PhysicsInformedNN with variables in different dimensions (#339)
    • Can't integrate in infinite intervals (#386)
    • Error regarding complex function in boundary condition (MethodError: no method matching decompose(::Num)) (#387)
    • Support solving equations in a mesh (#389)
    • Broadcast integral calculation and increase performance (#390)
    • Error using build_loss_function and PDESystem when reproducing code (#394)
    • Update Zygote to last version (#397)
    • Example of integro-differential equation does not work (#403)
    • System of PDEs with CUDA? (#410)
    • NNPDENS tests are failing (#411)
    • Plotting a 6 dimensional problem as two 3 dimensional graphs? (#413)
    • Problem when running example in the Official Documentation due to HCubatureJL (#414)
    • Got "MethodError: no method matching PDESystem" after following copy pastable code from https://neuralpde.sciml.ai/dev/pinn/poisson/ (#416)

    Merged pull requests:

    • Split IDE tests (#391) (@KirillZubov)
    • Add heterogeneous test (#392) (@KirillZubov)
    • Increase integral performance with map (#393) (@killah-t-cell)
    • Added syntax highlighting for citation entry in README (#395) (@paniash)
    • update low_level.md (#396) (@KirillZubov)
    • CompatHelper: bump compat for SymbolicUtils to 0.16, (keep existing compat) (#402) (@github-actions[bot])
    • fix IDE doc (#404) (@killah-t-cell)
    • Update neural adapter tests with heterogeneous inputs (#405) (@KirillZubov)
    • Forward tests (#408) (@KirillZubov)
  • v4.0.0(Sep 2, 2021)

    NeuralPDE v4.0.0

    Diff since v3.15.0

    Closed issues:

    • Error in the parser for multiplication between differentials (#234)
    • Question about Fokker-Planck example (Integral constraint) (#279)
    • ModelingToolkit @register function not defined (#353)
    • Trouble modelling wave equations (#356)
    • NeuralPDE.jl HJB example not working (#378)

    Merged pull requests:

    • Add Integro Differential Equations Support (#330) (@ashutosh-b-b)
    • Neural adapter (#336) (@KirillZubov)
    • LaTeXify equations in Docs/PINN Tutorials (#352) (@navidcy)
    • fix additional_loss (#362) (@KirillZubov)
    • Update optimal_stopping_american.md images and equations (#363) (@Vaibhavdixit02)
    • CompatHelper: bump compat for "ModelingToolkit" to "6" (#365) (@github-actions[bot])
    • ModelingToolkit" to "6" update (#366) (@KirillZubov)
    • Improve damped wave results in docs (#367) (@killah-t-cell)
    • Approximation function tests and update Fokker-Planck (#370) (@KirillZubov)
    • Add required name in PDESystem (#371) (@ChrisRackauckas)
    • CompatHelper: bump compat for "ModelingToolkit" to "6" (#373) (@github-actions[bot])
    • [WIP] ZDM / GB heterogeneous input (#374) (@killah-t-cell)
    • Update 2D gpu example (#377) (@KirillZubov)
    • CompatHelper: add new compat entry for SymbolicUtils at version 0.13, (keep existing compat) (#379) (@github-actions[bot])
    • CompatHelper: add new compat entry for Symbolics at version 3, (keep existing compat) (#380) (@github-actions[bot])
    • CompatHelper: add new compat entry for DomainSets at version 0.5, (keep existing compat) (#381) (@github-actions[bot])
    • Update IDE docs (#382) (@ChrisRackauckas)
  • v3.15.0(Jul 25, 2021)

    NeuralPDE v3.15.0

    Diff since v3.14.0

    Closed issues:

    • Extract num of points for quadrature.jl methods (#322)
    • Examples with system of PDE (#332)
    • Broadcast piracy (#337)
    • LoadError: type GridTraining has no field points (#340)
    • Method Error: no method matching eps(::Type{Union{Nothing, Float32}}) (#346)

    Merged pull requests:

    • Update system.md (#335) (@killah-t-cell)
    • fix broadcast piracy (#338) (@KirillZubov)
    • More examples of systems of PDEs #332 (#341) (@killah-t-cell)
    • Flux v0.12.5 -> v0.12.4 (#345) (@KirillZubov)
    • Added damped wave example #327 (#347) (@killah-t-cell)
    • Fixed 1D damped wave example (#348) (@killah-t-cell)
    • Bump Flux (#351) (@ChrisRackauckas)
  • v3.14.0(Jun 26, 2021)

    NeuralPDE v3.14.0

    Diff since v3.13.0

    Closed issues:

    • "Systems of PDEs" example in documentation does give the anticipated results (#302)
    • The loss function for each equation (#320)
    • kwarg isFloat32 (#323)
    • kwarg isGPU (#324)

    Merged pull requests:

    • The loss function for each equation (#321) (@KirillZubov)
    • Extract num of points for quadrature.jl methods (#326) (@KirillZubov)
    • Default quasi-random to Latin Hypercubes (#328) (@ChrisRackauckas)
    • Fix PDAE system example and derivative neural network approximation (#329) (@KirillZubov)
    • CompatHelper: bump compat for "GalacticOptim" to "2" (#334) (@github-actions[bot])
  • v3.13.0(Jun 11, 2021)

  • v3.12.0(Jun 9, 2021)

    NeuralPDE v3.12.0

    Diff since v3.11.1

    Closed issues:

    • QuasiRandomTraining, every training step take a new quasirandom sample (#288)
    • Allow number of boundary points to be chosen by the user by a keyword argument (#296)
    • Stable and Dev version of Doc (#315)

    Merged pull requests:

    • Allow number of boundary points (#316) (@KirillZubov)
    • update ModelingToolkit v5.18.0 (#317) (@KirillZubov)
    • GPU fix (#318) (@KirillZubov)
  • v3.11.1(Jun 1, 2021)

    NeuralPDE v3.11.1

    Diff since v3.11.0

    Closed issues:

    • Use FiniteDiffereneces.jl for the nested finite differencing (#215)
    • Example seems to have a missing Quadrature subclass (#308)
    • Possibility to define vector parameters with @parameters (#311)

    Merged pull requests:

    • QuasiRandomTraining, on every training iteration, it is generated a new quasirandom sample (#314) (@KirillZubov)
  • v3.11.0(May 28, 2021)

    NeuralPDE v3.11.0

    Diff since v3.10.1

    Closed issues:

    • Why is phi([x,y],res.minimizer) an array? (#232)
    • Systems of one equation (#294)
    • How to specify parameters that depend on geometry in the PDE? (#301)
    • Error when trying to solve Poisson's equation with custom boundary conditions and layered dielectrics. (#303)
    • Demo failure (#305)
    • Bug in "1-D Burgers' Equation With Low-Level API" tutorial (#313)

    Merged pull requests:

    • CompatHelper: bump compat for "CUDA" to "3.0" (#293) (@github-actions[bot])
    • Systems of one equation (#295) (@KirillZubov)
    • defaults should be a keyword argument (#297) (@YingboMa)
    • CompatHelper: bump compat for "Distributions" to "0.25" (#304) (@github-actions[bot])
    • Demo failure (#306) (@KirillZubov)
    • Update Systems of PDEs docs (#307) (@KirillZubov)
    • Fix typo StochasticTraining (#310) (@KirillZubov)
    • Update 2D.md (#312) (@akashkgarg)
  • v3.10.1(Apr 13, 2021)

  • v3.10.0(Apr 1, 2021)

    NeuralPDE v3.10.0

    Diff since v3.9.0

    Closed issues:

    • Deep learning for symbolic mathematics (#44)
    • question about pinn solver (#283)

    Merged pull requests:

    • CompatHelper: bump compat for "Flux" to "0.12" (#282) (@github-actions[bot])
  • v3.9.0(Mar 27, 2021)

    NeuralPDE v3.9.0

    Diff since v3.8.2

    Closed issues:

    • Bug in PINN param_estim.md (#271)
    • Bug in example file (#275)

    Merged pull requests:

    • readme docstring (#269) (@anandijain)
    • CompatHelper: add new compat entry for "DocStringExtensions" at version "0.8" (#270) (@github-actions[bot])
    • update param estim (#272) (@KirillZubov)
    • Docs estim params (#273) (@KirillZubov)
    • Update readme.md (#278) (@KirillZubov)
  • v3.8.2(Mar 18, 2021)

Owner
SciML Open Source Scientific Machine Learning
Open source software for scientific machine learning