
PINNErrorVsTime Benchmark Updates #1159

Open
wants to merge 15 commits into base: master

Conversation

ParamThakkar123
Contributor

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API

Additional context

Add any other context about the problem here.

@ParamThakkar123
Contributor Author

@ChrisRackauckas I got an error while running the iterations saying that maxiters must be greater than 1000, so I set all maxiters to 1100. The choice was somewhat arbitrary; is that a good number?

@ChrisRackauckas
Member

https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/

It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Jan 19, 2025

https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/

It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

Yes. Actually, I set it to that number just to get rid of that error.

@ChrisRackauckas
Member

Wait what's the error?

@ParamThakkar123
Contributor Author

Wait what's the error?

The error went away when I ran it again.

@ChrisRackauckas
Member

what error?

@ParamThakkar123
Contributor Author

maxiters should be a number greater than 1000

@ChrisRackauckas
Member

can you please just show the error...

@ParamThakkar123
Contributor Author

AssertionError: maxiters for CubaCuhre(0, 0, 0) should be larger than 1000

Stacktrace: [1] __solvebp_call(prob::IntegralProblem{false, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, NeuralPDE.var"#integrand#109"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p),
NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}}, Vector{Float64}, @Kwargs{}}, alg::CubaCuhre,
sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, lb::Vector{Float64}, ub::Vector{Float64}, p::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}; reltol::Float64, abstol::Float64, maxiters::Int64) @ IntegralsCuba C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:139 [2] __solvebp_call @ C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:134 [inlined] [3] #__solvebp_call#4 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:95 [inlined] [4] __solvebp_call @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:94 [inlined] [5] #rrule#5 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:17 [inlined] [6] rrule @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:14 [inlined] [7] rrule @ C:\Users\Hp.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144 [inlined] [8] chain_rrule_kw @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined] [9] macro expansion @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined] [10] _pullback @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91 [inlined] [11] solve! @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:84 [inlined] ... @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:162 [55] solve(::OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0ca831bf, 0x51abc8eb, 0xda1f388f, 0xf472bcea, 0x7492cfcb), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), 
typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}, ::Optimisers.Adam; kwargs::@Kwargs{callback::var"#11#18"{var"#loss_function#17"{OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x700712a1, 0x6c1c91c2, 0xa5bfc01b, 0x66a91103, 0x0d12fcff), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), 
typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}}, Vector{Any}, Vector{Any}, Vector{Any}}, maxiters::Int64}) @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:83 [56] allen_cahn(strategy::QuadratureTraining{CubaCuhre, Float64}, minimizer::Optimisers.Adam, maxIters::Int64) @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:105

@ChrisRackauckas
Member

I see, that's for the sampling algorithm. You should only need that on Cuhre?

@ParamThakkar123
Contributor Author

I see, that's for the sampling algorithm. You should only need that on Cuhre?

Yes. But since Cuhre was just the first one in line, I thought setting maxiters to 1100 only for it would not solve the problem, so I set it to 1100 for all of them.
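For reference, here is a minimal sketch (not the benchmark's exact code; the tolerances are illustrative and the package loading follows the extension-based setup discussed later in this thread) of how the strategy list could be set up so that only the Cuhre quadrature gets the larger maxiters, since it is the only one whose assertion requires it:

using NeuralPDE, Integrals, Cuba, Cubature

strategies = [
    # CubaCuhre asserts maxiters > 1000, so only this entry needs the bump
    QuadratureTraining(; quadrature_alg = CubaCuhre(), reltol = 1e-6,
                       abstol = 1e-3, maxiters = 1100),
    # the other quadrature strategies can keep their previous settings
    QuadratureTraining(; quadrature_alg = HCubatureJL(), reltol = 1e-6,
                       abstol = 1e-3, maxiters = 100),
    QuadratureTraining(; quadrature_alg = CubatureJLh(), reltol = 1e-6,
                       abstol = 1e-3, maxiters = 100),
]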

@ParamThakkar123
Contributor Author

The CI has passed here, and all the code seems to run correctly. Can you please review?

@ChrisRackauckas
Member

@ArnoStrouwen SciML/Integrals.jl#124 can you remind me what the purpose behind this was?

@ArnoStrouwen
Member

I don't remember myself, but that PR links to:
SciML/Integrals.jl#47

@ChrisRackauckas
Member

Uninitialized memory in the original C code: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community; that's the classic method everyone points to when they say "all of the old stuff is robust" 😅

@ChrisRackauckas
Member

Can you force latest majors and make sure the manifest resolves?

@ParamThakkar123
Contributor Author

I force-bumped to the latest versions and resolved the manifest, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature temporarily to resolve them. The manifest then resolved, but adding both of them back again introduces more version conflicts.

@ChrisRackauckas
Member

Can you share the resolution errors?

@ParamThakkar123
Contributor Author

[two screenshots of the Pkg resolution errors]

@ChrisRackauckas These are the resolution errors that occur

@ChrisRackauckas
Member

Oh, those were turned into extensions. Change using IntegralsCuba, IntegralsCubature into using Integrals, Cuba, Cubature, and change the dependencies to depend directly on Cuba and Cubature.
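A minimal sketch of that change, assuming the benchmark's preamble is the only place these packages are loaded:

# before: the old glue packages
# using IntegralsCuba, IntegralsCubature

# after: the Cuba/Cubature solvers now live behind package extensions,
# so load the backend packages themselves alongside Integrals
using Integrals, Cuba, Cubature

and in the benchmark's Project.toml, replace the IntegralsCuba / IntegralsCubature entries with direct dependencies on Cuba and Cubature.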

@ParamThakkar123
Contributor Author

Sure !! 🫡

@ParamThakkar123
Contributor Author

Instead of Stochastic, QuasiRandom

Made the changes

@ParamThakkar123
Contributor Author

@ChrisRackauckas while fixing diffusion_et.jmd I keep getting this error:

MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float64})

Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, u::NeuralPDE.var"#7#8", x::Matrix{Float64}, εs::Vector{Vector{Float64}}, order::Int64, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:384
  [4] macro expansion
    @ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
  [6] macro expansion
    @ .\none:0 [inlined]
  [7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, ::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, ::NeuralPDE.var"#7#8", ::Nothing)
    @ NeuralPDE .\none:0
  [8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr})(::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::Function, ::Function, ::Function, ::Nothing)
    @ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
  [9] (::NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing})(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
 [10] (::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})(θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
 [11] (::NeuralPDE.var"#263#284"{Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}})(pde_loss_function::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})
    @ NeuralPDE .\none:0
...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\Pma4a\src\solve.jl:95
 [22] diffusion(strategy::GridTraining{Vector{Float64}}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:68
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W3sZmlsZQ==.jl:7

What are the possible fixes for this?

@ChrisRackauckas
Member

If I knew, I'd do it. It would take time to dig in that I don't have right now.

@avik-pal
Member

  [1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354

See the Phi type; it is clearly incorrect. It can't contain an Optimisers leaf, which would explain the UnknownDevice type.

You are passing the optimization state in as the Lux parameters, which was deprecated in the last major release. You need to use state.u somewhere in the code.
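A minimal sketch of the kind of fix being suggested, assuming the offending call sits in the benchmark's callback (the variable names here are illustrative, not the benchmark's exact code):

# Optimization.jl passes an OptimizationState to the callback; the raw
# parameter ComponentVector lives in its `u` field. Handing the state
# itself to phi is what triggers the UnknownDevice dispatch above.
cb = function (state, l)
    θ = state.u            # extract the parameters before using them
    # ... any logging that evaluates the network should call phi(x, θ) ...
    return false
end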

@ParamThakkar123
Contributor Author

See the Phi type; it is clearly incorrect. It can't contain an Optimisers leaf, which would explain the UnknownDevice type. You are passing the optimization state in as the Lux parameters, which was deprecated in the last major release. You need to use state.u somewhere in the code.

Okay, got your point. This helped. Thanks a lot

@ParamThakkar123
Contributor Author

@ChrisRackauckas I think the HCubatureJL algorithm implementation has some issues, because when I replaced just this algorithm with GridTraining everything executed perfectly, but when I ran the code with the HCubatureJL() algorithm it gave an AssertionError.

@ParamThakkar123
Contributor Author

@ChrisRackauckas All the benchmarks in PINNErrorVsTime have been updated and are working fine. Need your review on this.

@ChrisRackauckas
Member

The Allen-Cahn plot looks quite wonky?

@ParamThakkar123
Contributor Author

Might be. I set it to 1100 epochs. I'm not sure about that, but by reading various implementations, documentation, and your suggestions I somehow got this working 😅. Any suggestions to improve this implementation? I can look into it.

@ChrisRackauckas
Member

I don't see why the x-axis should be so large; no lines can be seen.

@ParamThakkar123
Contributor Author

cb = function (p, l)
    try
        # Mark when the callback starts so its own overhead can be excluded
        deltaT_s = time_ns()
        # Elapsed wall-clock time since training started, minus accumulated callback overhead
        ctime = time_ns() - startTime - timeCounter

        push!(times, ctime / 1e9)   # convert nanoseconds to seconds
        push!(losses, l)
        push!(error, l)

        # Accumulate the time spent inside this callback
        timeCounter += time_ns() - deltaT_s
        return false                # do not stop the optimization
    catch e
        @warn "Callback error: $e"
        return false
    end
end

Probably due to this line?

@ChrisRackauckas
Member

Is there one point that is very large in time? Or is the axis being set directly?

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Feb 16, 2025

The axis is being set directly. I haven't predefined anything for time.

@ParamThakkar123
Contributor Author

ctime = time_ns() - startTime - timeCounter

Probably it is due to this?

@ChrisRackauckas
Member

What's the maximum in the array?

@ParamThakkar123
Contributor Author

What's the maximum in the array?

Actually, it's calculating the time values and adding them to the array; I haven't implemented code to display them. To find that out I think I would need to run the training code again, and it would take around 32 hours to complete the training. Is there a faster way to see the maximum value?
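One way to avoid a full retrain next time is to serialize the timing dictionary as soon as the run finishes and inspect it offline; a sketch, assuming the per-strategy times end up in a Dict of vectors as in the printouts further below (the file name and the times_dict variable are illustrative):

using Serialization

# at the end of the benchmark run
serialize("pinn_times.jls", times_dict)

# later, in a fresh session, without retraining
times_dict = deserialize("pinn_times.jls")
maximum(reduce(vcat, collect(values(times_dict))))   # largest recorded time in seconds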

@ParamThakkar123
Contributor Author

Should I consider running this again, @ChrisRackauckas?

@ParamThakkar123
Contributor Author

@ChrisRackauckas
The maximum value in the times array is 66.796.

This value comes from QuasiRandomTraining + the BFGS algorithm.

@ChrisRackauckas
Member

Is it just one time point? It's definitely an outlier. You got rid of the maximum time part: put that back?

@ParamThakkar123
Contributor Author

No. It's an array of two values:
"11" => [66.5992, 66.796]

But I gave you the maximum of the two. I haven't removed the maximum-time part from the implementation; it's there, but the whole code takes around 30 hours to execute for all the algorithms, so I only reran it for the most suspected outlier.

@ParamThakkar123
Contributor Author

Any suggestions on further changes to this implementation, @ChrisRackauckas?

@ParamThakkar123
Contributor Author

Training with QuasiRandomTraining:
Dict{Any, Any} with 2 entries:
"12" => [9.60293, 10.3536]
"11" => [79.1825, 79.3383]

Training with QuadratureTraining CubaCuhre():
Dict{Any, Any} with 2 entries:
"12" => [0.0060337, 0.0161921]
"11" => [0.0056472, 0.0096335]

Training with QuadratureTraining HCubatureJL():
Dict{Any, Any} with 2 entries:
"12" => [0.0048781, 0.0403565]
"11" => [0.0055059, 0.0097088]

Training with QuadratureTraining CubatureJLh():
Dict{Any, Any} with 2 entries:
"12" => [0.0099027, 0.0466078]
"11" => [0.0041709, 0.0073577]

Training with QuadratureTraining CubatureJLp():
Dict{Any, Any} with 2 entries:
"12" => [0.0048316, 0.292919]
"11" => [0.0078547, 0.0157797]

Training with GridTraining:
Dict{Any, Any} with 2 entries:
"12" => [0.0088805, 0.120971]
"11" => [0.0086563, 0.0147051]

Training with StochasticTraining:
Dict{Any, Any} with 2 entries:
"12" => [0.0083882, 0.0354763]
"11" => [0.0110269, 0.0161864]

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Feb 19, 2025

@ChrisRackauckas Sorry for bothering you so much given your busy schedule and this project being lower priority, but on my end I think this is done: I updated the benchmark codes so that they run as expected. I have studied differential equations, PINNs, etc., but I have yet to explore this particular benchmark, so I am not sure whether this output is satisfactory to you. Your suggestions would be greatly helpful in keeping me on the right trajectory so that we reach a satisfactory output.

Also, as per the website, there are only 2 days left on my claim for this project under the small grants program, and this particular piece is the last one mentioned in the list. I am open to contributing to SciML even outside of small grants, but as of now this is the last one left.
