New loss #937


Open · wants to merge 30 commits into base: master

Conversation

@AstitvaAggarwal (Contributor):

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API


@AstitvaAggarwal changed the title from "Complete Constrained PINNS, BPINNs." to "New loss" on Apr 14, 2025
@AstitvaAggarwal (Contributor, Author):

@ChrisRackauckas I think there are some compat issues in the Downgrade Test env.

@ChrisRackauckas (Member):

Yes, don't worry about downgrade.

@AstitvaAggarwal (Contributor, Author) commented May 5, 2025:

@ChrisRackauckas this PR adds the new loss for NNODE and BNNODE (with appropriate tests) and fixes the tests erroring out in BPINN_PDE_tests.jl (this started once the repo was completely overhauled). Just to give some insight, here is the BPINN model performance with just 20 training points on t = (0, 4), as in the added tests where we solve LV (Lotka-Volterra):
[Figure: BPINN model performance on the LV problem]

u2 is our new model.

src/ode_solve.jl (Outdated) · Comment on lines 301 to 303
# Quadrature is applied on timewise losses
# Gridtraining/trapezoidal rule quadrature_weights is dt.*ones(T, length(t))
return sum(sum(abs2, loss_vals[i, :] .* quadrature_weights) for i in 1:n_output)
ChrisRackauckas (Member):

How is this different from the quadrature loss?

ChrisRackauckas (Member):

Is there a reason not to use cubature?

@AstitvaAggarwal (Contributor, Author) commented May 6, 2025:

It's pretty much a quadrature loss, but we don't create an IntegralProblem since we have a fixed set of precomputed points from which to build the loss function. We're not using HCubature here because, again, we can't evaluate f(x) at arbitrary domain points (so it's not exactly h- or p-adaptive integration).
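
For illustration, a minimal sketch of such a fixed-node quadrature loss (the helper name is hypothetical, not the PR's actual implementation): the residual is evaluated only at precomputed nodes and reduced with the matching weights, so no IntegralProblem or adaptive cubature is involved.

```julia
using FastGaussQuadrature

# Hypothetical helper: weighted sum of squared residuals over fixed
# Gauss-Legendre nodes on [a, b]; no adaptive refinement happens.
function fixed_quadrature_loss(residual, a, b; n = 20)
    x, w = gausslegendre(n)               # nodes and weights on [-1, 1]
    t = @. (b - a) / 2 * x + (a + b) / 2  # affine map of nodes to [a, b]
    wt = @. (b - a) / 2 * w               # rescaled weights
    return sum(wt .* abs2.(residual.(t)))
end

# Example: residual of u' = u for the (wrong) candidate u(t) = t.
loss = fixed_quadrature_loss(t -> 1 - t, 0.0, 4.0)
```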

ChrisRackauckas (Member):

But why not combine the two loss calculations and apply cubature to the result? That would give strictly faster convergence?

@AstitvaAggarwal (Contributor, Author) commented May 6, 2025:

Ohh okay, missed that, thanks!

@AstitvaAggarwal (Contributor, Author):

@ChrisRackauckas GTM?

@@ -6,7 +6,7 @@
 dataset <: Union{Vector{Nothing}, Vector{<:Vector{<:AbstractFloat}}}
 priors <: Vector{<:Distribution}
 phystd::Vector{Float64}
-phynewstd::Vector{Float64}
+phynewstd::Function
ChrisRackauckas (Member):

Specialize?
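
"Specialize?" here presumably means parameterizing the struct on the concrete function type instead of storing an abstract `Function` field. A minimal sketch of the pattern, with hypothetical struct names:

```julia
# Abstract field type: every call to cfg.phynewstd(...) dispatches dynamically.
struct LossConfigAbstract
    phynewstd::Function
end

# Parametric field type: Julia specializes compiled code on the concrete
# closure type F, so calls through the field are statically dispatched.
struct LossConfig{F <: Function}
    phynewstd::F
end

cfg = LossConfig(t -> fill(0.05, 2))  # F is the closure's concrete type
```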

@@ -91,14 +91,17 @@ Networks 9, no. 5 (1998): 987-1000.
strategy <: Union{Nothing, AbstractTrainingStrategy}
param_estim
additional_loss <: Union{Nothing, Function}
dataset <: Union{Vector{Nothing}, Vector{<:Vector{<:AbstractFloat}}}
ChrisRackauckas (Member):

Use an empty vector for "no dataset" to make this type stable.
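
A minimal sketch of what this suggestion means, with hypothetical names: "no dataset" becomes an empty, concretely typed vector, so the field type no longer needs the Union with `Vector{Nothing}`:

```julia
struct PINNAlg{D <: AbstractVector{<:AbstractVector{<:AbstractFloat}}}
    dataset::D
end

# "No dataset" is an empty vector of a concrete type rather than
# Vector{Nothing}, so the field type is always concrete:
no_data   = PINNAlg(Vector{Vector{Float64}}())
with_data = PINNAlg([[0.0, 0.1], [1.0, 0.9]])

# Downstream code branches on emptiness instead of on eltype:
has_data(alg::PINNAlg) = !isempty(alg.dataset)
```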

@@ -91,14 +91,17 @@ Networks 9, no. 5 (1998): 987-1000.
strategy <: Union{Nothing, AbstractTrainingStrategy}
param_estim
additional_loss <: Union{Nothing, Function}
dataset <: Union{Vector{Nothing}, Vector{<:Vector{<:AbstractFloat}}}
ChrisRackauckas (Member):

Need to document in the docstring the required dataset form and the new kwargs


@testitem "ODE Parameter Estimation Improvement" tags=[:nnode] begin
using OrdinaryDiffEq, Random, Lux, OptimizationOptimJL, LineSearches
using FastGaussQuadrature, PolyChaos, Integrals
ChrisRackauckas (Member):

Suggested change:
-using FastGaussQuadrature, PolyChaos, Integrals
+using FastGaussQuadrature

@@ -92,6 +93,7 @@ Optimization = "4"
OptimizationOptimJL = "0.4"
OptimizationOptimisers = "0.3"
OrdinaryDiffEq = "6.87"
PolyChaos = "0.2.11"
ChrisRackauckas (Member):

Suggested change:
-PolyChaos = "0.2.11"

@@ -126,10 +129,11 @@ LuxLib = "82251201-b29d-42c6-8e01-566dec8acb11"
MethodOfLines = "94925ecb-adb7-4558-8ed8-f975c56a0bf4"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
PolyChaos = "8d666b04-775d-5f6e-b778-5ac7c70f65a3"
ChrisRackauckas (Member):

Suggested change:
-PolyChaos = "8d666b04-775d-5f6e-b778-5ac7c70f65a3"

ReTestItems = "817f1d60-ba6b-4fd5-9520-3cf149f6a823"
StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
TensorBoardLogger = "899adc3e-224a-11e9-021f-63837185c80f"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Aqua", "CUDA", "DiffEqNoiseProcess", "ExplicitImports", "Flux", "Hwloc", "InteractiveUtils", "LineSearches", "LuxCUDA", "LuxCore", "LuxLib", "MethodOfLines", "OptimizationOptimJL", "OrdinaryDiffEq", "ReTestItems", "StochasticDiffEq", "TensorBoardLogger", "Test"]
test = ["Aqua", "CUDA", "DiffEqNoiseProcess", "ExplicitImports", "FastGaussQuadrature", "Flux", "Hwloc", "InteractiveUtils", "LineSearches", "LuxCUDA", "LuxCore", "LuxLib", "MethodOfLines", "OptimizationOptimJL", "OrdinaryDiffEq", "PolyChaos", "ReTestItems", "StochasticDiffEq", "TensorBoardLogger", "Test"]
ChrisRackauckas (Member):

Suggested change:
-test = ["Aqua", "CUDA", "DiffEqNoiseProcess", "ExplicitImports", "FastGaussQuadrature", "Flux", "Hwloc", "InteractiveUtils", "LineSearches", "LuxCUDA", "LuxCore", "LuxLib", "MethodOfLines", "OptimizationOptimJL", "OrdinaryDiffEq", "PolyChaos", "ReTestItems", "StochasticDiffEq", "TensorBoardLogger", "Test"]
+test = ["Aqua", "CUDA", "DiffEqNoiseProcess", "ExplicitImports", "FastGaussQuadrature", "Flux", "Hwloc", "InteractiveUtils", "LineSearches", "LuxCUDA", "LuxCore", "LuxLib", "MethodOfLines", "OptimizationOptimJL", "OrdinaryDiffEq", "ReTestItems", "StochasticDiffEq", "TensorBoardLogger", "Test"]
