Harmonic Oscillator Decapode Calibration #177
@jpfairbanks mentioned today at the PI kickoff meeting that maybe the best way forward is to use SciMLStructures to make the desired Decapode parameters tunable. I also tried Enzyme; it did not work. @ChrisRackauckas
What is the error you get from Enzyme?
Yes, though note that you will need to use the PR branch SciML/SciMLSensitivity.jl#1057 until it merges (hopefully in the next week or so).
inside of the remake call. |
Can we reduce the MWE further so it does not include Decapodes?
This is a version that uses a function originally generated by Decapodes, but with all the Decapodes and CombinatorialSpaces machinery stripped out; it gives the same results when solved. Trying to autodiff gives roughly the same errors as before.
Versions:
Errors:
Enzyme:
I just realized that in the original version where the gradient worked, the parameters had to be a ComponentArray. However, the Decapodes folks use NamedTuples for their parameters. ComponentArrays do seem to work for simulation anyway. Even with the parameters as a ComponentArray, this example still fails.
We could use ComponentArrays for parameters if necessary. We just need to be able to index by symbols and store heterogeneous collections.
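For reference, ComponentArrays already supports exactly that symbol indexing over flat storage. A minimal sketch (the field names here are illustrative, not from the thread):

```julia
using ComponentArrays

p = ComponentArray(k = 1.0, ks = [2.0, 3.0])
p.k         # 1.0, property-style access, as the generated f uses p.k
p[:ks]      # [2.0, 3.0], indexing by symbol
getdata(p)  # the underlying flat Vector{Float64}, [1.0, 2.0, 3.0]
```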
Can you isolate this down to just |
You mean something like this?
Zygote error:
Enzyme error:
using ComponentArrays
using FiniteDiff
using Enzyme
not_decapode_f = begin
function simulate()
begin
var"__•1" = Vector{Float64}(undef, 1)
__V̇ = Vector{Float64}(undef, 1)
end
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = var"__•1"
V̇ = __V̇
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = 1.0)
f = not_decapode_f()
f(du,u,p,1.0)
df = Enzyme.make_zero(f)
d_du = Enzyme.make_zero(du)
d_u = Enzyme.make_zero(u)
dp = Enzyme.make_zero(p)
d_du .= 0; d_u .= 0; dp[1] = 1.0
Enzyme.autodiff(Enzyme.Forward, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [-1.0])
dp # ComponentVector{Float64}(k = 1.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
d_du .= 0; d_u .= 0; dp[1] = 1.0
Enzyme.autodiff(Enzyme.Reverse, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [0.0])
dp # ComponentVector{Float64}(k = 3.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
# Confirm with finite difference
ppet = ComponentArray(k = 1 + 1e-7)
du2 = copy(du)
f(du,u,p,1.0)
f(du2,u,ppet,1.0)
(du2 - du) ./ 1e-7 # ComponentVector{Float64}(X = [0.0], V = [-1.0000000005838672])
Interesting issue, but target this form.
Well, PreallocationTools is still used for the ForwardDiff inside the ODE solver. But the reverse mode shouldn't care.
using Enzyme
not_decapode_f = begin
function simulate()
begin
var"__•1" = Vector{Float64}(undef, 1)
__V̇ = Vector{Float64}(undef, 1)
end
f(du, u, p, t) = begin
begin
X = u[1]
k = p[1]
V = u[2]
end
var"•1" = var"__•1"
V̇ = __V̇
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
du[1] = V
du[2] = V̇[1]
nothing
end
end
end
du = [0.0,0.0]
u = [1.0,3.0]
p = [1.0]
f = not_decapode_f()
f(du,u,p,1.0)
df = Enzyme.make_zero(f)
d_du = Enzyme.make_zero(du)
d_u = Enzyme.make_zero(u)
dp = Enzyme.make_zero(p)
df = Enzyme.make_zero(f)
d_du .= 0; d_u .= 0; dp[1] = 1.0
Enzyme.autodiff(Enzyme.Reverse, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
@show d_du # [0.0,0.0]
@show dp # [3.0]
@show du # [3.0,-1.0]
df = Enzyme.make_zero(f)
# compute the gradient of du[2] with respect to p
d_du = [0.0, 1.0]; d_u .= 0; dp[1] = 0.0
Enzyme.autodiff(Enzyme.Reverse, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
# derivative of du[2] w.r.t. p, which is what the finite difference computes below (second component)
@show dp # dp = [-1.0]
# Confirm with finite difference
ppet = [1 + 1e-7]
du2 = copy(du)
f(du,u,p,1.0)
f(du2,u,ppet,1.0)
@show (du2 - du) ./ 1e-7 # [0.0,-1.0000000005838672]
Whoops, commented in the wrong issue. But in any case @ChrisRackauckas, your issue here was calling Enzyme incorrectly: in reverse mode you need to set the shadow of the return to 1, not the shadow of the input. The second case above shows the correct result when called properly.
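As a minimal illustration of that seeding rule (a sketch independent of the thread's code, assuming only Enzyme): for an in-place function, reverse mode is seeded through the shadow of the output, and the gradient is read back from the shadow of the input.

```julia
using Enzyme

# In-place squaring: out plays the role of the "return", x is the input.
square!(out, x) = (out[1] = x[1]^2; nothing)

x, out = [3.0], [0.0]
d_x, d_out = [0.0], [1.0]  # seed the shadow of the *output*, not the input

Enzyme.autodiff(Enzyme.Reverse, square!,
                Duplicated(out, d_out), Duplicated(x, d_x))
d_x  # [6.0], i.e. d(x^2)/dx at x = 3
```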
Yup, dope. So then this is exactly what's needed in the context of the real model:
using Enzyme, PreallocationTools, ComponentArrays
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = 1.0)
f = not_decapode_f()
f(du,u,p,1.0)
df = Enzyme.make_zero(f)
d_du = Enzyme.make_zero(du)
d_u = Enzyme.make_zero(u)
dp = Enzyme.make_zero(p)
d_du .= 0; d_u .= 0; dp[1] = 1.0
Enzyme.autodiff(Enzyme.Forward, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [-1.0])
dp # ComponentVector{Float64}(k = 1.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
d_du .= 0; d_u .= 0; dp[1] = 0.0
d_du = ComponentArray(X = [0.0], V = [1.0])
Enzyme.autodiff(Enzyme.Reverse, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [0.0])
dp # ComponentVector{Float64}(k = 3.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
# Confirm with finite difference
ppet = ComponentArray(k = 1 + 1e-7)
du2 = copy(du)
f(du,u,p,1.0)
f(du2,u,ppet,1.0)
(du2 - du) ./ 1e-7 # ComponentVector{Float64}(X = [0.0], V = [-1.0000000005838672])
which works on my machine. @jClugstor can you make our code target that?
To be clear, you don't need to reallocate a new array for d_du; you just need to set its values to 0, with a 1 where you want the derivatives.
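A minimal sketch of that in-place reseeding, reusing the d_du shadow from the snippets above instead of allocating a fresh ComponentArray:

```julia
# Reuse the existing shadow buffer between autodiff calls:
d_du .= 0        # zero the whole shadow in place
d_du.V .= 1.0    # seed only the component whose derivative you want
```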
Yeah this is just easiest to write. Note that SciMLSensitivity probably needs the |
The only thing preventing the Decapode generated code from being autodiffed with Enzyme now seems to be that the generated function needs to return |
Just add it in there. https://github.com/AlgebraicJulia/Decapodes.jl/blob/main/src/simulation.jl#L565-L576 It's one line right there.
Oh cool, thanks. In the meantime I just put it there manually.
cd(@__DIR__)
using Pkg
Pkg.activate(".")
Pkg.instantiate()
using Catlab
using Catlab.Graphics
using CombinatorialSpaces
using ComponentArrays
using CombinatorialSpaces.ExteriorCalculus
using DiagrammaticEquations
using DiagrammaticEquations.Deca
using Decapodes
using GeometryBasics:Point1,Point2,Point3
using OrdinaryDiffEq
using Enzyme
using SciMLSensitivity
Point1D = Point1{Float64}
Point2D = Point2{Float64};
Point3D = Point3{Float64};
decapode_f = begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:570 =#
function simulate(mesh, operators, hodge = GeometricHodge())
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:570 =#
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:571 =#
begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:174 =#
end
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:572 =#
begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:468 =#
end
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:573 =#
begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:227 =#
var"__•1" = Decapodes.FixedSizeDiffCache(Vector{Float64}(undef, nparts(mesh, :V)))
__V̇ = Decapodes.FixedSizeDiffCache(Vector{Float64}(undef, nparts(mesh, :V)))
end
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:574 =#
f(du, u, p, t) = begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:574 =#
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:575 =#
begin
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:252 =#
X = u.X
k = p.k
V = u.V
end
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:576 =#
var"•1" = Decapodes.get_tmp(var"__•1", u)
V̇ = Decapodes.get_tmp(__V̇, u)
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
#= /home/jadonclugston/.julia/packages/Decapodes/qxJAY/src/simulation.jl:577 =#
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
x0 = [1.0]
v0 = [0.0]
u₀ = ComponentArray(X = x0, V = v0)
p = ComponentArray(k = 1.0)
tₑ = 10
s_prime = EmbeddedDeltaSet1D{Bool, Point1D}()
add_vertices!(s_prime,1)
s = EmbeddedDeltaDualComplex1D{Bool, Float64, Point1D}(s_prime)
function generate(operators,mesh)
end
f = decapode_f(s, generate)
#Enzyme autodiff f test ============================================================================================================================
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = 1.0)
f(du,u,p,1.0)
df = Enzyme.make_zero(f)
d_du = Enzyme.make_zero(du)
d_u = Enzyme.make_zero(u)
dp = Enzyme.make_zero(p)
d_du .= 0; d_u .= 0; dp[1] = 1.0
Enzyme.autodiff(Enzyme.Forward, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [-1.0])
dp # ComponentVector{Float64}(k = 1.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
d_du .= 0; d_u .= 0; dp[1] = 0.0
d_du = ComponentArray(X = [0.0], V = [1.0])
Enzyme.autodiff(Enzyme.Reverse, Duplicated(f, df), Enzyme.Duplicated(du, d_du),
Enzyme.Duplicated(u,d_u), Enzyme.Duplicated(p,dp),
Enzyme.Const(1.0))
d_du # ComponentVector{Float64}(X = [0.0], V = [0.0])
dp # ComponentVector{Float64}(k = 3.0)
du # ComponentVector{Float64}(X = [3.0], V = [-1.0])
# ===================================================================================================================================================
data_prob = ODEProblem{true, SciMLBase.FullSpecialize}(f, u₀, (0, tₑ),p)
sol = solve(data_prob,Tsit5(), saveat = 0.1)
dat = vcat([u.X for u in sol.u]...)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, FBDF(autodiff = false), sensealg = EnzymeVJP())
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Enzyme.gradient(Enzyme.Reverse, fake_loss, p)
Looks like autodiff through |
You only need Enzyme for the inside. Just do Zygote on the outside.
Do you mean
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, FBDF(autodiff = false), sensealg = EnzymeVJP())
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Zygote.gradient(fake_loss, p)
Make this print out the args.
Looks like there are only 47 args where 49 are expected.
This is a pretty small example that demonstrates the Zygote error.
using Enzyme, PreallocationTools, ComponentArrays
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = 1.0)
f = not_decapode_f()
data_prob = ODEProblem{true, SciMLBase.FullSpecialize}(f, u₀, (0, tₑ),p)
sol = solve(data_prob,Tsit5(), saveat = 0.1)
dat = vcat([u.X for u in sol.u]...)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, FBDF(autodiff = false), sensealg = EnzymeVJP())
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Zygote.gradient(fake_loss,p)
using Enzyme, PreallocationTools, ComponentArrays, Zygote, OrdinaryDiffEq, SciMLSensitivity
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
var"•1" .= (.-)(k)
V̇ .= var"•1" .* X
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
u = ComponentArray(X = [1.0], V = [0.0])
p = ComponentArray(k = 1.0)
f = not_decapode_f()
data_prob = ODEProblem{true, SciMLBase.FullSpecialize}(f, u, (0, 20),p)
sol = solve(data_prob,Tsit5(), saveat = 0.1)
dat = vcat([u.X for u in sol.u]...)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, Tsit5(), sensealg = EnzymeVJP())
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Zygote.gradient(fake_loss,p)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, Tsit5(), sensealg = QuadratureAdjoint(autojacvec=EnzymeVJP()))
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
So I read the SciMLSensitivity docs wrong... oops. It does change what happens, so I thought it was valid. Anyway, changing to It's the exact same function that can be differentiated through with Enzyme on its own.
See if |
That warning, in the context of a broadcast as here, isn't a type-inference failure but an inability to ascertain aliasing info, where one of the arguments is non-differentiable.
So then my understanding is that there's nothing on our end to improve here, and it just needs an Enzyme improvement? Or should we try something like an in-place map! instead?
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
let var"__•1"=var"__•1", __V̇=__V̇
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
var"•1" .= (.-)(k)
map!((x,y)->x*y,V̇,var"•1",X)
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
end
@jClugstor check if using a |
with:
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
let var"__•1"=var"__•1", __V̇=__V̇
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
#var"•1" .= (.-)(k)
map!(x -> .-x, var"•1",k)
#V̇ .= var"•1" .* X
map!((x,y)->x*y,V̇,var"•1",X)
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
end
gets the same issue.
Doesn't |
Oh my bad, for some reason I thought it was |
Tried out Enzyme master, still an error.
using Enzyme, PreallocationTools, ComponentArrays, Zygote, OrdinaryDiffEq, SciMLSensitivity
Enzyme.API.runtimeActivity!(true)
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
let var"__•1"=var"__•1", __V̇=__V̇
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
#var"•1" .= (.-)(k)
map!(x -> .-x, var"•1",k)
#V̇ .= var"•1" .* X
map!((x,y)->x*y,V̇,var"•1",X)
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
end
u = ComponentArray(X = [1.0], V = [0.0])
p = ComponentArray(k = 1.0)
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = [1.0])
f = not_decapode_f()
f(du,u,p,1.0)
f(du,u,p,1.0)
data_prob = ODEProblem{true, SciMLBase.FullSpecialize}(f, u, (0, 20),p)
sol = solve(data_prob,Tsit5(), saveat = 0.1)
dat = vcat([u.X for u in sol.u]...)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, Tsit5(), sensealg = QuadratureAdjoint(autojacvec=EnzymeVJP()))
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Enzyme.gradient(Reverse, fake_loss, p)
The error points to the call to:
@add_kwonly function ODEProblem{iip}(f::AbstractODEFunction{iip},
u0, tspan, p = NullParameters(),
problem_type = StandardODEProblem();
kwargs...) where {iip}
_u0 = prepare_initial_state(u0)
_tspan = promote_tspan(tspan)
warn_paramtype(p)
new{typeof(_u0), typeof(_tspan),
isinplace(f), typeof(p), typeof(f),
typeof(kwargs),
typeof(problem_type)}(f,
_u0,
_tspan,
p,
kwargs,
problem_type)
end
No, please just do this one at a time. Just Enzyme in the interior, on the vjp.
Oh yeah, sorry, I should have Zygote there.
using Enzyme, PreallocationTools, ComponentArrays, Zygote, OrdinaryDiffEq, SciMLSensitivity
Enzyme.API.runtimeActivity!(false)
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
let var"__•1"=var"__•1", __V̇=__V̇
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
#var"•1" .= (.-)(k)
map!(x -> .-x, var"•1",k)
#V̇ .= var"•1" .* X
map!((x,y)->x*y,V̇,var"•1",X)
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
nothing
end
end
end
end
u = ComponentArray(X = [1.0], V = [0.0])
p = ComponentArray(k = 1.0)
du = ComponentArray(X = [0.0], V = [0.0])
u = ComponentArray(X = [1.0], V = [3.0])
p = ComponentArray(k = [1.0])
f = not_decapode_f()
f(du,u,p,1.0)
data_prob = ODEProblem{true, SciMLBase.FullSpecialize}(f, u, (0, 20),p)
sol = solve(data_prob,Tsit5(), saveat = 0.1)
dat = vcat([u.X for u in sol.u]...)
function fake_loss(p)
prob = remake(data_prob, p = p)
soln = solve(prob, Tsit5(), sensealg = QuadratureAdjoint(autojacvec=EnzymeVJP()))
@info("Done")
# return soln(tₑ)
sum(last(soln)) # last, not soln(tₑ), to avoid interpolation failures when AD fails
end
Zygote.gradient(fake_loss,p)
It's pointing to the line |
Yeah, Enzyme (reasonably) thinks that those two could alias with each other.
Sorry, what two things does it think could alias?
I tried replacing the
#getproperty(du, :X) .= V
#getproperty(du, :V) .= V̇
setproperty!(du,:X, V)
setproperty!(du,:V, V̇)
And it still errors with mismatched activity:
ERROR: Enzyme execution failed.
Mismatched activity for: %unbox27.fca.1.0.1.insert.pn.extract.0 = phi {} addrspace(10)* [ %unbox27.fca.0.load, %L29 ], [ %getfield4, %L31 ] const val: %getfield4 = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr3 unordered, align 8, !dbg !114, !tbaa !25, !alias.scope !33, !noalias !36, !nonnull !19, !dereferenceable !41, !align !42
value=Unknown object of type Vector{Float64}
llvalue= %getfield4 = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr3 unordered, align 8, !dbg !114, !tbaa !25, !alias.scope !33, !noalias !36, !nonnull !19, !dereferenceable !41, !align !42
You may be using a constant variable as temporary storage for active memory (https://enzyme.mit.edu/julia/stable/faq/#Activity-of-temporary-storage). If not, please open an issue, and either rewrite this variable to not be conditionally active or use Enzyme.API.runtimeActivity!(true) as a workaround for now
Stacktrace:
[1] setindex!
@ ./array.jl:1021
[2] setindex!
@ ./array.jl:1041
[3] macro expansion
@ ~/.julia/packages/ComponentArrays/joxVV/src/array_interface.jl:0
[4] _setindex!
@ ~/.julia/packages/ComponentArrays/joxVV/src/array_interface.jl:131
Stacktrace:
[1] throwerr(cstr::Cstring)
@ Enzyme.Compiler ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:1620
[2] checkbounds
@ ./abstractarray.jl:702 [inlined]
[3] getindex
@ ./subarray.jl:322 [inlined]
[4] setindex!
@ ./array.jl:1040 [inlined]
[5] macro expansion
@ ~/.julia/packages/ComponentArrays/joxVV/src/array_interface.jl:0 [inlined]
[6] _setindex!
@ ~/.julia/packages/ComponentArrays/joxVV/src/array_interface.jl:131
[7] setproperty!
@ ~/.julia/packages/ComponentArrays/joxVV/src/namedtuple_interface.jl:17 [inlined]
[8] f
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:26
[9] ODEFunction
@ ~/.julia/packages/SciMLBase/sakPO/src/scimlfunctions.jl:2296 [inlined]
[10] #224
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:222 [inlined]
[11] diffejulia__224_11092_inner_1wrap
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:0
[12] macro expansion
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6606 [inlined]
[13] enzyme_call
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6207 [inlined]
[14] CombinedAdjointThunk
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6084 [inlined]
[15] autodiff
@ ~/.julia/packages/Enzyme/YQwVA/src/Enzyme.jl:309 [inlined]
[16] vec_pjac!(out::ComponentVector{…}, λ::Vector{…}, y::ComponentVector{…}, t::Float64, S::AdjointSensitivityIntegrand{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:297
[17] AdjointSensitivityIntegrand
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:316 [inlined]
[18] (::AdjointSensitivityIntegrand{…})(t::Float64)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:328
[19] evalrule(f::AdjointSensitivityIntegrand{…}, a::Float64, b::Float64, x::Vector{…}, w::Vector{…}, gw::Vector{…}, nrm::typeof(LinearAlgebra.norm))
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/evalrule.jl:0
[20] #6
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:15 [inlined]
[21] ntuple
@ ./ntuple.jl:48 [inlined]
[22] do_quadgk(f::AdjointSensitivityIntegrand{…}, s::Tuple{…}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(LinearAlgebra.norm), segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:13
[23] #50
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:253 [inlined]
[24] handle_infinities(workfunc::QuadGK.var"#50#51"{…}, f::AdjointSensitivityIntegrand{…}, s::Tuple{…})
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:145
[25] quadgk(::AdjointSensitivityIntegrand{…}, ::Float64, ::Vararg{…}; atol::Float64, rtol::Float64, maxevals::Int64, order::Int64, norm::Function, segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:252
[26] _adjoint_sensitivities(sol::ODESolution{…}, sensealg::QuadratureAdjoint{…}, alg::Tsit5{…}; t::Vector{…}, dgdu_discrete::Function, dgdp_discrete::Nothing, dgdu_continuous::Nothing, dgdp_continuous::Nothing, g::Nothing, abstol::Float64, reltol::Float64, callback::Nothing, kwargs::@Kwargs{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:389
[27] _adjoint_sensitivities
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:331 [inlined]
[28] #adjoint_sensitivities#63
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/sensitivity_interface.jl:401 [inlined]
[29] (::SciMLSensitivity.var"#adjoint_sensitivity_backpass#308"{…})(Δ::ODESolution{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/concrete_solve.jl:625
[30] ZBack
@ ~/.julia/packages/Zygote/nsBv0/src/compiler/chainrules.jl:211 [inlined]
[31] (::Zygote.var"#291#292"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206
[32] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[33] #solve#51
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003 [inlined]
[34] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[35] #291
@ ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206 [inlined]
[36] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[37] solve
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993 [inlined]
[38] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[39] fake_loss
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:52 [inlined]
[40] (::Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{Float64, Vector{…}, Tuple{…}}}, Any})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[41] (::Zygote.var"#75#76"{Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{…}}, Any}})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:91
[42] gradient(f::Function, args::ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(k = 1:1,)}}})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:148
[43] top-level scope
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:59
Some type information was truncated. Use `show(err)` to see complete types.
Could this be because of something in ComponentArrays?
Change those broadcasts to |
Still gives mismatched activity:
not_decapode_f = begin
function simulate()
begin
var"__•1" = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
__V̇ = PreallocationTools.FixedSizeDiffCache(Vector{Float64}(undef, 1))
end
let var"__•1"=var"__•1", __V̇=__V̇
f(du, u, p, t) = begin
begin
X = u.X
k = p.k
V = u.V
end
var"•1" = PreallocationTools.get_tmp(var"__•1", u)
V̇ = PreallocationTools.get_tmp(__V̇, u)
#var"•1" .= (.-)(k)
map!(x -> .-x, var"•1",k)
#V̇ .= var"•1" .* X
map!((x,y)->x*y,V̇,var"•1",X)
copyto!(getproperty(du, :X),V)
copyto!(getproperty(du, :V),V̇)
#setproperty!(du,:X, V)
#setproperty!(du,:V, V̇)
nothing
end
end
end
end
ERROR: Enzyme execution failed.
Mismatched activity for: %.pn214 = phi {} addrspace(10)* [ %getfield, %L325 ], [ %43, %L332 ] const val: %getfield = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr unordered, align 8, !dbg !25, !tbaa !22, !alias.scope !37, !noalias !40, !nonnull !17, !dereferenceable !45, !align !46
value=Unknown object of type Vector{Float64}
llvalue= %getfield = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr unordered, align 8, !dbg !25, !tbaa !22, !alias.scope !37, !noalias !40, !nonnull !17, !dereferenceable !45, !align !46
You may be using a constant variable as temporary storage for active memory (https://enzyme.mit.edu/julia/stable/faq/#Activity-of-temporary-storage). If not, please open an issue, and either rewrite this variable to not be conditionally active or use Enzyme.API.runtimeActivity!(true) as a workaround for now
Stacktrace:
[1] copyto!
@ ./abstractarray.jl:1068
[2] f
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:24
Stacktrace:
[1] throwerr(cstr::Cstring)
@ Enzyme.Compiler ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:1620
[2] unalias
@ ./abstractarray.jl:1481 [inlined]
[3] copyto!
@ ./abstractarray.jl:1067 [inlined]
[4] f
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:24
[5] ODEFunction
@ ~/.julia/packages/SciMLBase/sakPO/src/scimlfunctions.jl:2296 [inlined]
[6] #224
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:222 [inlined]
[7] diffejulia__224_8189_inner_1wrap
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:0
[8] macro expansion
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6606 [inlined]
[9] enzyme_call
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6207 [inlined]
[10] CombinedAdjointThunk
@ ~/.julia/packages/Enzyme/YQwVA/src/compiler.jl:6084 [inlined]
[11] autodiff
@ ~/.julia/packages/Enzyme/YQwVA/src/Enzyme.jl:309 [inlined]
[12] vec_pjac!(out::ComponentVector{…}, λ::Vector{…}, y::ComponentVector{…}, t::Float64, S::AdjointSensitivityIntegrand{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:297
[13] AdjointSensitivityIntegrand
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:316 [inlined]
[14] (::AdjointSensitivityIntegrand{…})(t::Float64)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:328
[15] evalrule(f::AdjointSensitivityIntegrand{…}, a::Float64, b::Float64, x::Vector{…}, w::Vector{…}, gw::Vector{…}, nrm::typeof(LinearAlgebra.norm))
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/evalrule.jl:0
[16] #6
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:15 [inlined]
[17] ntuple
@ ./ntuple.jl:48 [inlined]
[18] do_quadgk(f::AdjointSensitivityIntegrand{…}, s::Tuple{…}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(LinearAlgebra.norm), segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:13
[19] #50
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:253 [inlined]
[20] handle_infinities(workfunc::QuadGK.var"#50#51"{…}, f::AdjointSensitivityIntegrand{…}, s::Tuple{…})
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:145
[21] quadgk(::AdjointSensitivityIntegrand{…}, ::Float64, ::Vararg{…}; atol::Float64, rtol::Float64, maxevals::Int64, order::Int64, norm::Function, segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:252
[22] _adjoint_sensitivities(sol::ODESolution{…}, sensealg::QuadratureAdjoint{…}, alg::Tsit5{…}; t::Vector{…}, dgdu_discrete::Function, dgdp_discrete::Nothing, dgdu_continuous::Nothing, dgdp_continuous::Nothing, g::Nothing, abstol::Float64, reltol::Float64, callback::Nothing, kwargs::@Kwargs{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:389
[23] _adjoint_sensitivities
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:331 [inlined]
[24] #adjoint_sensitivities#63
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/sensitivity_interface.jl:401 [inlined]
[25] (::SciMLSensitivity.var"#adjoint_sensitivity_backpass#308"{…})(Δ::ODESolution{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/concrete_solve.jl:625
[26] ZBack
@ ~/.julia/packages/Zygote/nsBv0/src/compiler/chainrules.jl:211 [inlined]
[27] (::Zygote.var"#291#292"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206
[28] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[29] #solve#51
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003 [inlined]
[30] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[31] #291
@ ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206 [inlined]
[32] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[33] solve
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993 [inlined]
[34] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[35] fake_loss
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:52 [inlined]
[36] (::Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{Float64, Vector{…}, Tuple{…}}}, Any})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[37] (::Zygote.var"#75#76"{Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{…}}, Any}})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:91
[38] gradient(f::Function, args::ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(k = 1:1,)}}})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:148
[39] top-level scope
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:59
Some type information was truncated. Use `show(err)` to see complete types.
|
@wsmoses I don't quite understand this one. |
Hm, isn't this an error saying to use runtime activity, due to some aliasing issue in the copy? |
None of the arrays can be aliasing though, they are all given as different buffers. |
Sure, but Julia doesn't know that. |
In particular, on line 24 of your snippet, it fails to statically prove this check won't be needed: https://github.com/JuliaLang/julia/blob/4954197196d657d14edd3e9c61ac101866e6fa25/base/abstractarray.jl#L1067 |
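For context, here is a pure-Base sketch of the check being referenced, with no Enzyme involved. copyto! routes through unalias, which consults Base.mightalias and defensively copies the source whenever it cannot prove the destination and source share no memory; the snippet only illustrates that guard, not the Enzyme failure itself.

```julia
# Illustrative sketch of Base's aliasing guard (no Enzyme needed).
src = rand(4)
alias_view = view(src, 1:4)   # shares memory with src
sep = rand(4)                 # provably separate buffer

# mightalias is Base's conservative runtime aliasing check:
@assert Base.mightalias(alias_view, src)   # true: they share memory
@assert !Base.mightalias(sep, src)         # false: separate allocations

# copyto! calls unalias, which copies src first when it might alias dest;
# Enzyme has to differentiate through that branch unless it can prove it dead.
copyto!(alias_view, src)
```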
@ChrisRackauckas I'm not sure on the semantics of PreallocationTools, but you may be able to give it an Enzyme.EnzymeRules.noalias attribute to specify that the return of PreallocationTools.get_tmp will never alias with any other memory buffer. |
There's no docs on the output and no test that uses it? I presume it outputs a bool? @jClugstor can you try adding to the script: EnzymeCore.EnzymeRules.noalias(::typeof(PreallocationTools.get_tmp), args...) = true |
Yeah, it's super experimental right now. It's used in CUDA.jl (presently its only user) if you want to take a look. I am worried about the semantic mismatch here though, so I want to understand the guarantees provided by PreallocationTools. |
https://github.com/SciML/PreallocationTools.jl/blob/master/src/PreallocationTools.jl#L13 It always builds its own cache buffers and then reinterprets those as needed for different types. So the cache construction ensures that the only way to get the same memory would be to |
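As a concrete sketch of that guarantee (assuming the standard PreallocationTools DiffCache API): the cache allocates its own buffers up front, and get_tmp hands back either the plain buffer or a reinterpreted view of the dual buffer, so a get_tmp result can only alias another get_tmp result from the same cache, never a user array.

```julia
using PreallocationTools, ForwardDiff

# DiffCache preallocates a plain buffer plus a widened buffer for Dual numbers.
cache = DiffCache(zeros(3))

u = ones(3)
tmp = get_tmp(cache, u)        # plain Float64 buffer owned by the cache
tmp .= 2 .* u

ud = ForwardDiff.Dual.(ones(3), 1.0)
tmpd = get_tmp(cache, ud)      # reinterpreted view of the cache's dual buffer

# tmp and tmpd come from buffers the cache itself allocated, so they cannot
# alias caller-owned arrays like u or ud.
```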
Sorry, been travelling.
ERROR: Enzyme execution failed.
Mismatched activity for: %.pn214 = phi {} addrspace(10)* [ %getfield, %L325 ], [ %43, %L332 ] const val: %getfield = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr unordered, align 8, !dbg !25, !tbaa !22, !alias.scope !37, !noalias !40, !nonnull !17, !dereferenceable !45, !align !46
value=Unknown object of type Vector{Float64}
llvalue= %getfield = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr unordered, align 8, !dbg !25, !tbaa !22, !alias.scope !37, !noalias !40, !nonnull !17, !dereferenceable !45, !align !46
You may be using a constant variable as temporary storage for active memory (https://enzyme.mit.edu/julia/stable/faq/#Activity-of-temporary-storage). If not, please open an issue, and either rewrite this variable to not be conditionally active or use Enzyme.API.runtimeActivity!(true) as a workaround for now
Stacktrace:
[1] copyto!
@ ./abstractarray.jl:1068
[2] f
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:25
Stacktrace:
[1] throwerr(cstr::Cstring)
@ Enzyme.Compiler ~/.julia/packages/Enzyme/YDcYf/src/compiler.jl:1688
[2] unalias
@ ./abstractarray.jl:1481 [inlined]
[3] copyto!
@ ./abstractarray.jl:1067 [inlined]
[4] f
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:25
[5] ODEFunction
@ ~/.julia/packages/SciMLBase/sakPO/src/scimlfunctions.jl:2296 [inlined]
[6] #224
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:222 [inlined]
[7] diffejulia__224_13409_inner_1wrap
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:0
[8] macro expansion
@ ~/.julia/packages/Enzyme/YDcYf/src/compiler.jl:6658 [inlined]
[9] enzyme_call
@ ~/.julia/packages/Enzyme/YDcYf/src/compiler.jl:6258 [inlined]
[10] CombinedAdjointThunk
@ ~/.julia/packages/Enzyme/YDcYf/src/compiler.jl:6135 [inlined]
[11] autodiff
@ ~/.julia/packages/Enzyme/YDcYf/src/Enzyme.jl:314 [inlined]
[12] vec_pjac!(out::ComponentVector{…}, λ::Vector{…}, y::ComponentVector{…}, t::Float64, S::AdjointSensitivityIntegrand{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:297
[13] AdjointSensitivityIntegrand
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:316 [inlined]
[14] (::AdjointSensitivityIntegrand{…})(t::Float64)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:328
[15] evalrule(f::AdjointSensitivityIntegrand{…}, a::Float64, b::Float64, x::Vector{…}, w::Vector{…}, gw::Vector{…}, nrm::typeof(LinearAlgebra.norm))
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/evalrule.jl:0
[16] #6
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:15 [inlined]
[17] ntuple
@ ./ntuple.jl:48 [inlined]
[18] do_quadgk(f::AdjointSensitivityIntegrand{…}, s::Tuple{…}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(LinearAlgebra.norm), segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:13
[19] #50
@ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:253 [inlined]
[20] handle_infinities(workfunc::QuadGK.var"#50#51"{…}, f::AdjointSensitivityIntegrand{…}, s::Tuple{…})
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:145
[21] quadgk(::AdjointSensitivityIntegrand{…}, ::Float64, ::Vararg{…}; atol::Float64, rtol::Float64, maxevals::Int64, order::Int64, norm::Function, segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:252
[22] _adjoint_sensitivities(sol::ODESolution{…}, sensealg::QuadratureAdjoint{…}, alg::Tsit5{…}; t::Vector{…}, dgdu_discrete::Function, dgdp_discrete::Nothing, dgdu_continuous::Nothing, dgdp_continuous::Nothing, g::Nothing, abstol::Float64, reltol::Float64, callback::Nothing, kwargs::@Kwargs{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:389
[23] _adjoint_sensitivities
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/quadrature_adjoint.jl:331 [inlined]
[24] #adjoint_sensitivities#63
@ ~/.julia/packages/SciMLSensitivity/4YtYh/src/sensitivity_interface.jl:401 [inlined]
[25] (::SciMLSensitivity.var"#adjoint_sensitivity_backpass#308"{…})(Δ::ODESolution{…})
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/4YtYh/src/concrete_solve.jl:625
[26] ZBack
@ ~/.julia/packages/Zygote/nsBv0/src/compiler/chainrules.jl:211 [inlined]
[27] (::Zygote.var"#291#292"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206
[28] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[29] #solve#51
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003 [inlined]
[30] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[31] #291
@ ~/.julia/packages/Zygote/nsBv0/src/lib/lib.jl:206 [inlined]
[32] (::Zygote.var"#2169#back#293"{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/ZygoteRules/M4xmc/src/adjoint.jl:72
[33] solve
@ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993 [inlined]
[34] (::Zygote.Pullback{…})(Δ::ODESolution{…})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[35] fake_loss
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:53 [inlined]
[36] (::Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{Float64, Vector{…}, Tuple{…}}}, Any})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface2.jl:0
[37] (::Zygote.var"#75#76"{Zygote.Pullback{Tuple{typeof(fake_loss), ComponentVector{…}}, Any}})(Δ::Float64)
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:91
[38] gradient(f::Function, args::ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(k = 1:1,)}}})
@ Zygote ~/.julia/packages/Zygote/nsBv0/src/compiler/interface.jl:148
[39] top-level scope
@ ~/Documents/Work/dev/DecapodesAutoDiff/EnzymeFix/EnzymeFixMaps.jl:60
Some type information was truncated. Use `show(err)` to see complete types. |
It looks to me like there's only mention of
function EnzymeCore.EnzymeRules.noalias(::Type{CT}, ::UndefInitializer, args...) where {CT <: CuArray}
    return nothing
end
Does this do anything? Just so I'm understanding, is this a matter of developing
By the way, this doesn't seem to be specific to DiffCaches; here's an example using just ComponentArrays.
using ComponentArrays, OrdinaryDiffEq
x0 = [1.0]
v0 = [0.0]
u₀ = ComponentArray(X=x0, V=v0)
p = ComponentArray(k=1.0)
tₑ = 10
f(du, u, p, t) = begin
X = u.X
k = p.k
V = u.V
var"•1" = -k
V̇ = var"•1" .* X
getproperty(du, :X) .= V
getproperty(du, :V) .= V̇
return nothing
end
data_prob = ODEProblem{true,SciMLBase.FullSpecialize}(f, u₀, (0, tₑ), p)
solve(data_prob, Tsit5())
using SciMLSensitivity, Zygote, Enzyme
function fake_loss(par)
prob = remake(data_prob, p=par)
soln = solve(prob, Tsit5(), sensealg = InterpolatingAdjoint())
@info("Done")
# return soln(tₑ)
sum(last(soln)) # use last, not soln(tₑ), to avoid interpolation failures when AD fails
end
p = ComponentArray(k=1.0)
Zygote.gradient(fake_loss, p)
Enzyme.gradient(Enzyme.Reverse, fake_loss, p)
The Zygote call works (but only with certain sensealg options). Enzyme gives a mismatched activity error.
ERROR: Mismatched activity for: store i64 %13, i64 addrspace(11)* %12, align 8, !dbg !44 const val: %13 = load i64, i64 addrspace(11)* %11, align 8, !dbg !44
Type tree: {[-1]:Pointer, [-1,0]:Pointer, [-1,0,-1]:Float@double, [-1,8]:Integer, [-1,9]:Integer, [-1,10]:Integer, [-1,11]:Integer, [-1,12]:Integer, [-1,13]:Integer, [-1,14]:Integer, [-1,15]:Integer, [-1,16]:Integer, [-1,17]:Integer, [-1,18]:Integer, [-1,19]:Integer, [-1,20]:Integer, [-1,21]:Integer, [-1,22]:Integer, [-1,23]:Integer, [-1,24]:Integer, [-1,25]:Integer, [-1,26]:Integer, [-1,27]:Integer, [-1,28]:Integer, [-1,29]:Integer, [-1,30]:Integer, [-1,31]:Integer, [-1,32]:Integer, [-1,33]:Integer, [-1,34]:Integer, [-1,35]:Integer, [-1,36]:Integer, [-1,37]:Integer, [-1,38]:Integer, [-1,39]:Integer}
llvalue= %13 = load i64, i64 addrspace(11)* %11, align 8, !dbg !44
You may be using a constant variable as temporary storage for active memory (https://enzyme.mit.edu/julia/stable/faq/#Activity-of-temporary-storage). If not, please open an issue, and either rewrite this variable to not be conditionally active or use Enzyme.API.runtimeActivity!(true) as a workaround for now
Stacktrace:
[1] _
@ ~/.julia/packages/SciMLBase/vhP5T/src/problems/ode_problems.jl:118
[2] ODEProblem
@ ~/.julia/packages/SciMLBase/vhP5T/src/problems/ode_problems.jl:111
[3] #remake#678
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:160
[4] remake
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:89
[5] remake
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:0
Stacktrace:
[1] _
@ ~/.julia/packages/SciMLBase/vhP5T/src/problems/ode_problems.jl:118 [inlined]
[2] ODEProblem
@ ~/.julia/packages/SciMLBase/vhP5T/src/problems/ode_problems.jl:111 [inlined]
[3] #remake#678
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:160 [inlined]
[4] remake
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:89 [inlined]
[5] remake
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:0 [inlined]
[6] augmented_julia_remake_17620_inner_1wrap
@ ~/.julia/packages/SciMLBase/vhP5T/src/remake.jl:0
[7] macro expansion
@ ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:7049 [inlined]
[8] enzyme_call(::Val{…}, ::Ptr{…}, ::Type{…}, ::Val{…}, ::Val{…}, ::Type{…}, ::Type{…}, ::Const{…}, ::Type{…}, ::Duplicated{…}, ::Const{…}, ::Const{…})
@ Enzyme.Compiler ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:6658
[9] (::Enzyme.Compiler.AugmentedForwardThunk{…})(::Const{…}, ::Duplicated{…}, ::Vararg{…})
@ Enzyme.Compiler ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:6546
[10] runtime_generic_augfwd(activity::Type{…}, width::Val{…}, ModifiedBetween::Val{…}, RT::Val{…}, f::typeof(Core.kwcall), df::Nothing, primal_1::@NamedTuple{…}, shadow_1_1::@NamedTuple{…}, primal_2::typeof(remake), shadow_2_1::Nothing, primal_3::ODEProblem{…}, shadow_3_1::Nothing)
@ Enzyme.Compiler ~/.julia/packages/Enzyme/OOd6p/src/rules/jitrules.jl:338
[11] fake_loss
@ ~/Documents/Work/dev/DecapodeCalibrateDemos/HarmonicOscillator/boiled_down.jl:36 [inlined]
[12] fake_loss
@ ~/Documents/Work/dev/DecapodeCalibrateDemos/HarmonicOscillator/boiled_down.jl:0 [inlined]
[13] augmented_julia_fake_loss_20634_inner_1wrap
@ ~/Documents/Work/dev/DecapodeCalibrateDemos/HarmonicOscillator/boiled_down.jl:0
[14] macro expansion
@ ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:7049 [inlined]
[15] enzyme_call
@ ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:6658 [inlined]
[16] AugmentedForwardThunk
@ ~/.julia/packages/Enzyme/OOd6p/src/compiler.jl:6546 [inlined]
[17] autodiff
@ ~/.julia/packages/Enzyme/OOd6p/src/Enzyme.jl:264 [inlined]
[18] autodiff
@ ~/.julia/packages/Enzyme/OOd6p/src/Enzyme.jl:332 [inlined]
[19] gradient(rm::ReverseMode{…}, f::typeof(fake_loss), x::ComponentVector{…})
@ Enzyme ~/.julia/packages/Enzyme/OOd6p/src/Enzyme.jl:1049
[20] top-level scope
@ ~/Documents/Work/dev/DecapodeCalibrateDemos/HarmonicOscillator/boiled_down.jl:46
Some type information was truncated. Use `show(err)` to see complete types. |
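As an AD-independent sanity check, FiniteDiff (loaded near the top of the thread) can approximate the same gradient by finite differences. This is only a sketch: it assumes fake_loss and the ComponentArrays example above are already in scope.

```julia
using FiniteDiff, ComponentArrays

# Hypothetical cross-check: approximate the gradient of fake_loss at k = 1.0
# without any AD machinery, to compare against whatever Zygote/Enzyme returns.
fd_grad = FiniteDiff.finite_difference_gradient(fake_loss, ComponentArray(k = 1.0))
```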
What happens if you enable the option suggested in the error message? (It needs to be done right after first importing Enzyme.)
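For reference, a sketch of the option being referred to. At the Enzyme versions in this thread it is a global flag, exactly as the error message spells it; newer Enzyme releases replace it with a per-call mode, so treat the exact spelling as version-dependent, and note that fake_loss and p in the comments refer to the example above.

```julia
using Enzyme

# Global workaround flag named in the error message; it must be set immediately
# after loading Enzyme, before any autodiff call is compiled.
Enzyme.API.runtimeActivity!(true)

# Then retry the failing call, e.g.:
# Enzyme.gradient(Enzyme.Reverse, fake_loss, p)

# Newer Enzyme releases drop the global flag in favor of a per-call mode:
# Enzyme.gradient(set_runtime_activity(Enzyme.Reverse), fake_loss, p)
```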
Yeah, setting |
An effort to create a very simple Decapode, to help simplify diagnosing autodiff-through-Decapodes issues.
This is a 1-D harmonic oscillator Decapode:
Zygote.gradient(fake_loss, p)
errors. Is the ForwardDiff coming from the solver adjoint?
Anyways, setting the sensealg:
soln = solve(prob, FBDF(autodiff = false), sensealg = SciMLSensitivity.ZygoteVJP)
and running Zygote.gradient(fake_loss, p) gets a different error inside of Zygote:
ERROR: ArgumentError: new: too few arguments (expected 49)