PINNErrorVsTime Benchmark Updates #1159
base: master
Conversation
@ChrisRackauckas I got an error when running the iterations saying that the maxiters are less than 1000, so I set all maxiters to 1100. The choice was a bit arbitrary, but is that a good number?
https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/ It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.
Yes. I actually set it to that number just to get rid of that error.
Wait, what's the error?
The error went away when I ran it again.
What error?
maxiters should be a number greater than 1000
Can you please just show the error...
I see, that's for the sampling algorithm. You should only need that on Cuhre?
Yes. But since Cuhre was the first one in the line, I thought setting it to 1100 just for that one would not solve the problem, so I set it to 1100 for all of them.
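For reference, a minimal sketch of scoping the larger maxiters to just the Cuba-based strategy; the tolerance values here are illustrative assumptions, not the benchmark's actual settings:

using NeuralPDE, Integrals, Cuba, Cubature

# Only the Cuhre strategy needs maxiters above 1000; the others can keep smaller budgets.
cuhre_strategy = NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(),
                                              reltol = 1e-4, abstol = 1e-5, maxiters = 1100)
hcub_strategy  = NeuralPDE.QuadratureTraining(quadrature_alg = HCubatureJL(),
                                              reltol = 1e-4, abstol = 1e-5, maxiters = 100)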
The CI has passed here, and all the code seems to run perfectly. Can you please review?
@ArnoStrouwen SciML/Integrals.jl#124 can you remind me what the purpose behind this was?
I don't remember myself, but that PR links to:
Uninitialized memory in the original C: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community; that's your classic method everyone points to when they say "all of the old stuff is robust" 😅
Can you force the latest majors and make sure the manifest resolves?
I force-bumped to the latest versions and resolved the manifests, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature for a while to resolve them. The manifest resolved, but adding both of them back poses some more version conflicts.
Can you share the resolution errors?
@ChrisRackauckas These are the resolution errors that occur
Oh, those were turned into extensions. Change
Sure!! 🫡
Made the changes.
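For reference, a minimal sketch of the kind of dependency change implied here, assuming the Cuba and Cubature bindings now live as extensions of Integrals.jl rather than as the separate IntegralsCuba/IntegralsCubature packages:

# Before (old wrapper packages):
# using IntegralsCuba, IntegralsCubature
# After: load Integrals.jl together with Cuba.jl / Cubature.jl so the extensions
# activate, keeping CubaCuhre(), CubatureJLh(), etc. available for QuadratureTraining.
using Integrals, Cuba, Cubature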
@ChrisRackauckas While fixing diffusion_et.jmd I keep getting this error:
MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float64})
Stacktrace:
[1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
@ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
[2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
@ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
[3] numeric_derivative(phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, u::NeuralPDE.var"#7#8", x::Matrix{Float64}, εs::Vector{Vector{Float64}}, order::Int64, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
@ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:384
[4] macro expansion
@ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
[5] macro expansion
@ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
[6] macro expansion
@ .\none:0 [inlined]
[7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, ::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, ::NeuralPDE.var"#7#8", ::Nothing)
@ NeuralPDE .\none:0
[8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr})(::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::Function, ::Function, ::Function, ::Nothing)
@ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
[9] (::NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing})(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
@ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
[10] (::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})(θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
@ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
[11] (::NeuralPDE.var"#263#284"{Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:30, Axis(weight = ViewAxis(1:20, ShapedAxis((10, 2))), bias = ViewAxis(21:30, Shaped1DAxis((10,))))), layer_2 = ViewAxis(31:140, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(141:151, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}})(pde_loss_function::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x7b9bebea, 0x6a5933cd, 0x4a5f5a8c, 0x91872721, 0xa2d91360), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Vector{Float64}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})
@ NeuralPDE .\none:0
...
@ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\Pma4a\src\solve.jl:95
[22] diffusion(strategy::GridTraining{Vector{Float64}}, minimizer::Optimisers.Adam, maxIters::Int64)
@ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:68
[23] top-level scope
@ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W3sZmlsZQ==.jl:7
What can be the possible fixes for this?
If I knew, I'd do it. It would take time to dig in that I don't have right now.
See the Phi type, it is clearly incorrect. It can't contain an Optimisers leaf, which would explain the UnknownDevice type. You are passing in the optimization state as Lux parameters, which was deprecated in the last major release. You need to do
Okay, got your point. This helped. Thanks a lot.
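For reference, a minimal sketch of the kind of fix described above, assuming the newer Optimization.jl callback signature where the first argument is an OptimizationState whose u field holds the Lux parameters; phi, xs, ts, and u_analytic are placeholders, not exact code from this PR:

cb = function (p, l)
    # p is an Optimization.OptimizationState; p.u is the ComponentVector of
    # network parameters, which is what NeuralPDE's phi expects as θ.
    u_predict = [first(phi([x, t], p.u)) for x in xs, t in ts]
    push!(error, sum(abs2, u_predict .- u_analytic))
    push!(losses, l)
    return false
end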
@ChrisRackauckas I think the
@ChrisRackauckas All the benchmarks in PINNErrorVsTime have been updated and are working fine. Need your review on this.
The Allen-Cahn plot looks quite wonky?
Might be. I set it at 1100 epochs. Not sure about that, but by reading various implementations, documentation, and your suggestions I somehow got this working 😅. Any suggestions to improve this implementation? I can look into it.
I don't see why the x-axis should be so large; no lines can be seen.
cb = function (p, l)
    try
        # mark the start of the callback so its own cost can be excluded
        deltaT_s = time_ns()
        # elapsed wall-clock time since training started, minus callback overhead
        ctime = time_ns() - startTime - timeCounter
        push!(times, ctime / 1e9)   # seconds
        push!(losses, l)
        push!(error, l)             # the loss value is also recorded as the error here
        timeCounter += time_ns() - deltaT_s
        return false
    catch e
        @warn "Callback error: $e"
        return false
    end
end
Probably due to this line?
Is there one point that is very large in time? Or is the axis being set directly?
The axis is being set directly. I haven't predefined anything for time.
Probably it is due to this?
What's the maximum in the array?
Actually, it's calculating the value of times and adding it to the array. I haven't implemented the code to display it. To find that out, I think I need to run the training code again, and it would take around 32 hours to complete training. Is there a faster way to see the maximum value?
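For what it's worth, if the times vectors from the run are still in the session or were saved with the benchmark output (an assumption), the outlier can be inspected without retraining:

maximum(times)   # largest recorded wall-clock time, in seconds
findmax(times)   # (value, index) of the suspected outlier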
Should I consider running this again, @ChrisRackauckas?
@ChrisRackauckas This value comes from the QuasiRandomTraining + BFGS algorithm.
Is it just one time point? It's definitely an outlier. You got rid of the maximum time part: put that back?
No. It's an array of two values, but I told you the maximum of the two. I haven't removed the maximum time part from the implementation; it's there, but the whole code takes around 30 hours to execute for all the algorithms. I just ran this code for the most suspected outlier.
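For reference, a minimal sketch of one way to reinstate a maximum-time cutoff in the callback, assuming the Optimization.jl convention that returning true from the callback halts the solve; maxtime is a hypothetical budget in seconds:

maxtime = 1000.0  # hypothetical wall-clock budget in seconds
cb = function (p, l)
    ctime = (time_ns() - startTime - timeCounter) / 1e9
    push!(times, ctime)
    push!(losses, l)
    # returning true stops the optimizer once the budget is exceeded,
    # so no outlier time points are recorded past the cutoff
    return ctime > maxtime
end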
Any suggestions on further changes to this implementation, @ChrisRackauckas?
(Plots attached for each strategy: QuasiRandomTraining, QuadratureTraining with CubaCuhre(), HCubatureJL(), CubatureJLh(), and CubatureJLp(), GridTraining, and StochasticTraining.)
@ChrisRackauckas Sorry for bothering you so much despite your busy schedule and this project being lower priority. On my end I think this is done, since I updated the benchmark codes so that they run as before. I have studied differential equations, PINNs, etc., but I am still exploring this area, so I am not sure whether this output is satisfactory to you. Your suggestions would be very helpful for keeping me on the right trajectory so that we reach a satisfactory result. Also, per the website there are only 2 days left on my claim for this project under the small grants, and this particular piece is the last one on the list. I am open to contributing to SciML outside of the small grants, but as of now this is the last one left.
Checklist
- Followed the contributor guidelines, in particular the SciML Style Guide and COLPRAC.