I kept seeing the same version of this code for implicit Heun's method,
```python
import numpy as np

def implicit_heun_method(f, y0, t0, tn, h):
    num_steps = int((tn - t0) / h)           # Calculate number of steps
    t = np.linspace(t0, tn, num_steps + 1)   # Create time array
    y = np.zeros((num_steps + 1, len(y0)))   # Initialize solution array
    y[0] = y0                                # Set initial condition
    for i in range(num_steps):
        k1 = f(t[i], y[i])
        y_guess = y[i] + h * k1   # Initial guess for y_n+1
        tol = 1e-8                # Tolerance for the iterative solver
        max_iter = 100            # Maximum number of iterations
        for j in range(max_iter):
            y_pred = y[i] + 0.5 * h * (k1 + f(t[i + 1], y_guess))  # Predictor using Heun's method
            if np.linalg.norm(y_pred - y_guess) < tol:
                break
            y_guess = y_pred
        y[i + 1] = y_pred
    return t, y
```
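As a quick sanity check, the function above can be run on dy/dt = -y, whose exact solution is exp(-t). The test problem and step size here are my own choices, and the function is repeated so the snippet stands alone:

```python
import numpy as np

def implicit_heun_method(f, y0, t0, tn, h):
    num_steps = int((tn - t0) / h)
    t = np.linspace(t0, tn, num_steps + 1)
    y = np.zeros((num_steps + 1, len(y0)))
    y[0] = y0
    for i in range(num_steps):
        k1 = f(t[i], y[i])
        y_guess = y[i] + h * k1   # explicit Euler guess for y_n+1
        for _ in range(100):
            y_pred = y[i] + 0.5 * h * (k1 + f(t[i + 1], y_guess))
            if np.linalg.norm(y_pred - y_guess) < 1e-8:
                break
            y_guess = y_pred
        y[i + 1] = y_pred
    return t, y

# Test problem: dy/dt = -y, y(0) = 1  ->  y(t) = exp(-t)
t, y = implicit_heun_method(lambda t, y: -y, np.array([1.0]), 0.0, 2.0, 0.01)
print(abs(y[-1, 0] - np.exp(-2.0)))  # small, consistent with a 2nd-order method
```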
It seems like a ChatGPT solution, and it makes some odd choices in how the implicit Heun method is defined. I'm not sure why it uses `np.linalg.norm` for the convergence criterion; there's nothing wrong with it, it's just a strange choice.
It also bakes the whole integration process into one function instead of breaking it into a single step-wise update, e.g. from the reading,
```python
def heun_step(state, rhs, dt, etol=0.000001, maxiters=100):
    '''Update a state to the next time increment using the implicit Heun's method.

    Arguments
    ---------
    state    : array of dependent variables
    rhs      : function that computes the RHS of the DiffEq
    dt       : float, time increment
    etol     : tolerance in error for each time step corrector
    maxiters : maximum number of iterations each time step can take

    Returns
    -------
    next_state : array, updated after one time increment
    '''
    e = 1
    eps = np.finfo('float64').eps
    next_state = state + rhs(state) * dt
    ################### New iterative correction #########################
    for n in range(0, maxiters):
        next_state_old = next_state
        next_state = state + (rhs(state) + rhs(next_state)) / 2 * dt
        e = np.sum(np.abs(next_state - next_state_old) / np.abs(next_state + eps))
        if e < etol:
            break
    ############### end of iterative correction #########################
    return next_state
```
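One advantage of the step-wise form is that the time loop lives outside the stepper, so the same driver works for any one-step method. A minimal driver for `heun_step` might look like the sketch below; the test problem dy/dt = -y and the step size are my own choices, and note that `rhs` here takes only the state, matching the signature above:

```python
import numpy as np

def heun_step(state, rhs, dt, etol=0.000001, maxiters=100):
    '''One implicit Heun step, as in the reading.'''
    eps = np.finfo('float64').eps
    next_state = state + rhs(state) * dt   # explicit Euler predictor
    for n in range(maxiters):
        next_state_old = next_state
        next_state = state + (rhs(state) + rhs(next_state)) / 2 * dt
        e = np.sum(np.abs(next_state - next_state_old) / np.abs(next_state + eps))
        if e < etol:
            break
    return next_state

# Drive the stepper over t in [0, 2] for dy/dt = -y, y(0) = 1
dt = 0.01
state = np.array([1.0])
for _ in range(int(2.0 / dt)):
    state = heun_step(state, lambda y: -y, dt)
print(abs(state[0] - np.exp(-2.0)))  # close to the exact value exp(-2)
```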
The way integration methods are introduced here, you take one time-step solution at a time. The ChatGPT method is a little closer to conventional integration functions like `scipy.integrate.solve_ivp`, the preferred integrator for ODEs in Python.
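For comparison, here is the same sort of test problem run through `scipy.integrate.solve_ivp`; the problem and tolerances are my own choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# dy/dt = -y, y(0) = 1, integrated to t = 2 with tightened tolerances
sol = solve_ivp(lambda t, y: -y, (0.0, 2.0), [1.0], rtol=1e-8, atol=1e-10)
print(abs(sol.y[0, -1] - np.exp(-2.0)))  # small error vs. the exact exp(-2)
```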
Just thought I'd start a discussion on this topic. There's nothing wrong with getting help from ChatGPT, but you do need to reference where the code is coming from; otherwise it's tough to know where issues might crop up. Also, take some time to read/edit the code it gives you. In `heun_step` we use `np.sum(np.abs(next_state - next_state_old) / np.abs(next_state + eps))` as the error. This is a nice relative change, rather than the norm of the difference, which is not normalized, so with the norm you need to vary the tolerance based upon each problem you encounter.
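To see why the normalization matters, consider the same absolute gap between iterates on a large-magnitude state; the numbers below are made up purely for illustration:

```python
import numpy as np

eps = np.finfo('float64').eps
old = np.array([1e6])
new = np.array([1e6 + 1.0])   # iterates differ by 1.0 on a state of size 1e6

abs_err = np.linalg.norm(new - old)                       # 1.0: fails a tol of 1e-8
rel_err = np.sum(np.abs(new - old) / np.abs(new + eps))   # ~1e-6: clearly converged
print(abs_err, rel_err)
```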