
Better interface for thermal models #57

Closed · sampotter opened this issue Jan 24, 2022 · 15 comments
@sampotter (Owner)

@steo85it @nschorgh

I think we should try to improve the interface to the time-dependent thermal models a bit.

We have a few different ideas at this point for how to do the time-dependent simulation (basically Norbert's "just do one MVP" approach vs solving the full radiosity system twice at each step). I think it's worth doing a direct comparison of these. But I'm getting lost in the details of how to set them up.

I would like to be able to write something like:

from flux.model import ThermalModel

thermal_model = ThermalModel(
    FF,
    times, # len(times) == N, times[i] = ith time point
    sun_positions, # sun_positions[i] = sun position corresponding to times[i]
    method='norbert', # should be 'norbert' or 'sam', just for example :-)
    **physical_parameter_kwargs, # passed on to PccThermalModel1D
)

The variable thermal_model should then be a Python iterator or generator. That is, we should be able to do things like:

for T in thermal_model:
    # T is an nz x shape_model.num_faces ndarray of temperatures,
    # one row per skin depth layer
    ...  # do things with T

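For concreteness, here is a minimal sketch of how such an iterator could be implemented (all names and shapes are illustrative, not the actual flux.model API):

```python
import numpy as np

class ThermalModelSketch:
    """Illustrative iterator over temperature fields, one per time step."""

    def __init__(self, times, sun_positions, nz, num_faces):
        self.times = times
        self.sun_positions = sun_positions
        self.nz = nz
        self.num_faces = num_faces

    def __iter__(self):
        for t, sun_pos in zip(self.times, self.sun_positions):
            # The real model would advance the 1D thermal solves by one
            # time step here; this sketch just yields a zero field.
            yield np.zeros((self.nz, self.num_faces))

model = ThermalModelSketch(times=[0.0, 1.0, 2.0],
                           sun_positions=[None, None, None],
                           nz=4, num_faces=6)
shapes = [T.shape for T in model]
```

Because `__iter__` is a generator, only one temperature field exists in memory at a time unless the caller chooses to accumulate them.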
I'm going to put together a rough draft of this, hopefully today or tomorrow, and then it would be helpful to get your feedback.

@steo85it (Collaborator)

I like to use this kind of "config class", which lets me easily get/set the options on the fly during the run (there is a usage example at the bottom of the file). Otherwise, happy to check out your solution for this. :)
Also, it might be convenient to keep track of the locations of the elements in the T array. I currently use xarrays for that (here is an example), but the same can be set up with "basic" tools such as arrays of indices and/or dictionaries.
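The "config class" idea can be sketched with a plain dataclass (the field names here are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ThermalConfig:
    """Illustrative config class: options live in one place and can be
    inspected or changed on the fly during a run."""
    method: str = 'norbert'
    nz: int = 60
    emissivity: float = 0.95
    extras: dict = field(default_factory=dict)

cfg = ThermalConfig()
cfg.method = 'sam'          # flip an option mid-run
cfg.extras['bcond'] = 'Q'   # stash anything else
```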

@sampotter (Owner, Author)

Oh interesting, I haven't heard of config classes before... Is this pattern documented somewhere?

Sure, it would be cool to be able to register the elements in the T array. Do you mean the (x, y, z) coordinates of each grid point for all the skin layers?


I added a new class with the start of this idea. See flux.model.ThermalModel. I also had to futz with the interface to PccThermalModel1D a bit. It should work as before, but there are some more error checks. I always forget how things should be specified for it.

Let me know what you think of this interface.

@steo85it (Collaborator) commented Jan 25, 2022

I got it from some Stack Overflow answer such as this one, but unfortunately I didn't note down which one... (it appears as one of the proposed solutions in what I linked, though). I'll test your interface now, but computing the FF for gerlache took me almost 4 h (vs 20 min with Embree): that's a bit worse than I remembered, and it held me up for a while before I could add the rest.

Sure, it would be cool to be able to register the elements in the T array. Do you mean the (x, y, z) coordinates of each grid point for all the skin layers?

Yes, precisely. In the example I posted above, I store all E(t, x, y, z) and T (with an extra depth dimension) in the same array (until you hit a memory issue, that is), whose elements can then be compared, plotted, or easily dumped to a GTiff.
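With plain numpy, the same bookkeeping might look like this (the dimension sizes are made up; an xarray.DataArray would carry the coordinates for you):

```python
import numpy as np

nt, nz, nfaces = 4, 3, 10   # hypothetical: time steps, depth layers, facets

# One big array T_all[t, z, face], plus coordinate arrays recording
# where each element lives.
T_all = np.zeros((nt, nz, nfaces))
coords = {
    't': np.linspace(0.0, 3.0, nt),       # time points
    'z': np.array([0.0, 0.1, 0.3]),       # skin depths
    'face': np.arange(nfaces),            # facet indices
}

# e.g. the surface (z index 0) temperature time series of facet 7:
surf_T_face7 = T_all[:, 0, 7]
```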

@steo85it (Collaborator) commented Jan 25, 2022

I get this error message when using your latest commit with try_time_dependent_problem.py (I hope I won't need to re-compute the FF ^^)

Traceback (most recent call last):
  File "/home/sberton2/Lavoro/code/python-flux/examples/gerlache/try_time_dependent_problem.py", line 40, in <module>
    thermal_model = ThermalModel(
  File "/home/sberton2/Lavoro/code/python-flux/src/flux/model.py", line 173, in __init__
    self._pcc_thermal_model_1d = PccThermalModel1D(
  File "src/flux/thermal.pyx", line 78, in flux.thermal.PccThermalModel1D.__cinit__
TypeError: only size-1 arrays can be converted to Python scalars

@sampotter (Owner, Author)

Yes, precisely. In the example I posted above, for example, I store all E(t,x,y,z) and T(z,t,x,y,z) to the same array (until you have a memory issue, that is), whose elements can then be compared, plotted, or easily dumped to a GTiff.

This seems like a great feature to have for people actually doing work, but it doesn't seem necessary to prove out the method in Paper #1. Please feel free to add this if you like, but I am going to focus on completing the paper for the time being... Hard to stay focused otherwise!

I get this error message when using your latest commit with try_time_dependent_problem.py (I hope I won't need to re-compute the FF ^^)

Ugh, sorry. Yes, that's probably the problem. The "pickle" format is not a good long-term solution if the code is going to change. If the interfaces of the functions change, things can go off the rails. Apologies for any inconvenience caused.

The solution to this is to put together some kind of persistent file format for the compressed form factor matrix. Thankfully, the "real" contents of a compressed form factor matrix are not likely to change even if the API changes (it's just a tree of matrices + some associated data), so this shouldn't be too hard to do. I'll figure this out now.
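One low-tech way to get such a persistent format (a sketch, not the actual solution adopted in #58): flatten the tree of matrix blocks into named arrays and save them with numpy, so the data survives even if the Python classes around it change.

```python
import numpy as np

def save_block_tree(path, tree):
    """Flatten a nested dict of matrix blocks into a single .npz file,
    keying each block by its path in the tree."""
    flat = {}
    def walk(node, key):
        if isinstance(node, dict):
            for name, child in node.items():
                walk(child, f'{key}__{name}' if key else name)
        else:
            flat[key] = np.asarray(node)
    walk(tree, '')
    np.savez(path, **flat)

tree = {'root': {'block00': np.eye(2), 'block01': np.zeros((2, 3))}}
save_block_tree('/tmp/ff_sketch.npz', tree)
loaded = np.load('/tmp/ff_sketch.npz')
```

Reading the file back only requires numpy, not the class that wrote it, which is the point of decoupling the on-disk format from the API.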

@sampotter (Owner, Author)

See #58.

@steo85it (Collaborator)

Ok, I'll maybe set up a smaller FF this time, if I want to actually get something done! ^^ And both problems are probably not relevant for the paper, as you say: good to keep them noted somewhere, but no hurry.

@sampotter (Owner, Author)

Yeah actually I think you're right. This is an ambitious task for right now.

@steo85it (Collaborator) commented Jan 26, 2022

I added a new class with the start of this idea. See flux.model.ThermalModel. I also had to futz with the interface to PccThermalModel1D a bit. It should work as before, but there are some more error checks. I always forget how things should be specified for it.

Let me know what you think of this interface.

Nice, but it's not very "backward-compatible"... this change broke all of my scripts calling PccThermalModel1D. Could you give some detail on how to easily adapt existing calls? E.g., how should I modify this call

# set up model
model = PccThermalModel1D(nfaces=spE.data.shape[0], z=z, T0=T0, ti=120., rhoc=960000.,  # ti=0., rhoc=0., #
                          emissivity=emiss, Fgeotherm=Fgeoth, Qprev=Qabs0.astype(np.double), bcond='Q')

to avoid getting

Traceback (most recent call last):
  File "/home/sberton2/Lavoro/code/plapp-flux/examples/directflux/run_direct.py", line 153, in <module>
    model = PccThermalModel1D(z=z, T0=T0, ti=120., rhoc=960000., emissivity=emiss, Fgeotherm=Fgeoth, Qprev=Qabs0.astype(np.double), Tsurfprev=Tsurfprev, bcond='Q')
  File "src/flux/thermal.pyx", line 89, in flux.thermal.PccThermalModel1D.__cinit__
TypeError: __cinit__() takes at least 8 positional arguments (4 given)

?

@sampotter (Owner, Author)

PccThermalModel1D internally stores everything as 1D arrays (of length nfaces or nz), and is really a bundle of independent 1D thermal models. I wanted this to be clearer in the interface (well, I need to add some docs!), so I decided to require the caller to pass all of the arguments as arrays themselves. Because of this, passing the nfaces argument is redundant, since it can be determined from the arrays. I also changed the arguments from keyword arguments to regular positional arguments (except bcond), since I don't know whether any of the parameters have sensible defaults, and it's probably better to force the user to decide what they want.

So, for your example, you would first set up appropriately sized arrays with constant values, then call:

Q0 = Qabs0.astype(np.float64)
model = PccThermalModel1D(z, T0, ti, rhoc, emiss, Fgeotherm, Q0, Tsurfprev) # bcond='Q' by default

If you look at lines 108 through 156, you can see the assumptions about the array sizes for now. But I will write this up as a docstring for __cinit__ so it's more accessible.

Of course it would be helpful to have a way to construct PccThermalModel1D from constant values, like you're doing above. This should be done as a @staticmethod to provide a "factory function" for PccThermalModel1D. I'll do this once I get to work in an hour or so...
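Such a factory could look roughly like this (a sketch with a stand-in class; the real PccThermalModel1D is a Cython extension type, and its exact signature may differ):

```python
import numpy as np

class ThermalModel1DSketch:
    """Stand-in illustrating a factory that promotes scalars to arrays."""

    def __init__(self, z, T0, ti, rhoc, emissivity, Fgeotherm, Q0):
        self.z, self.T0, self.ti, self.rhoc = z, T0, ti, rhoc
        self.emissivity, self.Fgeotherm, self.Q0 = emissivity, Fgeotherm, Q0

    @staticmethod
    def from_constants(nfaces, z, T0, ti, rhoc, emissivity, Fgeotherm, Q0):
        # Promote each scalar physical parameter to a per-face array.
        full = lambda x: np.full(nfaces, float(x))
        return ThermalModel1DSketch(
            np.asarray(z),
            np.full((nfaces, len(z)), float(T0)),
            full(ti), full(rhoc), full(emissivity), full(Fgeotherm), full(Q0))

model = ThermalModel1DSketch.from_constants(
    nfaces=4, z=np.linspace(0.0, 1.0, 10), T0=110.0, ti=120.0,
    rhoc=960000.0, emissivity=0.95, Fgeotherm=0.005, Q0=0.0)
```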

@steo85it (Collaborator) commented Jan 26, 2022

Ah OK, then I'll convert everything to arrays... it was a bit late yesterday when I realized that. :)
(it works now, but yes, the staticmethod you mentioned would be quite convenient! Thanks!)

@sampotter (Owner, Author)

Oops, I forgot I actually did make __cinit__ do some automatic promotion from constant values to arrays during initialization. Please give this a try. In __cinit__, take a look at the lines where the function promote_to_array_if_necessary is called.
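The idea behind that helper is roughly the following (an illustrative re-implementation, not the code in thermal.pyx):

```python
import numpy as np

def promote_to_array_if_necessary(value, shape):
    """Scalars become constant arrays of the requested shape; arrays are
    passed through after a shape check."""
    if np.isscalar(value):
        return np.full(shape, float(value))
    value = np.asarray(value, dtype=np.float64)
    if value.shape != tuple(shape):
        raise ValueError(f'expected shape {tuple(shape)}, got {value.shape}')
    return value

ti = promote_to_array_if_necessary(120.0, (5,))          # scalar -> array
emiss = promote_to_array_if_necessary(np.ones(5), (5,))  # array passes through
```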

@steo85it (Collaborator)

Yes, I actually copied those lines to my script and adapted them a bit, to make it work. I still got stuck somewhere when simply using your updated interface, I can check where and see if it's an easy fix.

@sampotter (Owner, Author)

Feel free to send errors here and I will help debug

@steo85it (Collaborator)

I think we are all using your new class w/o trouble now. I'm closing this, but feel free to reopen if still needed.
