Workflow to improve model granularity while limiting computational requirements #514
Unanswered · fvandebeek asked this question in Q&A
-
This seems like a reasonable use-case. The pre-resampled/clustered data is stored in memory (for better or worse). You can find it in `model._model_data_original`:

```python
import calliope
import xarray as xr

model = calliope.Model(...)  # includes time resampling etc.
model.run()  # mode is `plan`

# combine the full-resolution input data with the timestep-free results (capacities)
new_model_data = xr.merge([model._model_data_original, model.results.drop_dims("timesteps")])  # maybe `xr.concat`?

new_model = calliope.Model(config=None, model_data=new_model_data)
new_model.run_config["mode"] = "operate"
new_model.run_config["operation"]["use_cap_results"] = True
new_model.run()
```
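Continuing from the snippet above, the rerun could also be persisted to disk so the plan and operate steps don't have to share a session. A small sketch, assuming Calliope 0.6.x, where `Model.to_netcdf` and `calliope.read_netcdf` are available (the file name and the `carrier_prod` lookup are illustrative):

```python
import calliope

# Persist the operate-mode model (inputs + results) after solving, so the
# full-resolution flows can be analysed later without re-solving.
new_model.to_netcdf("operate_run.nc")  # `new_model` from the snippet above

# Later, in a fresh session:
reloaded = calliope.read_netcdf("operate_run.nc")
flows = reloaded.results["carrier_prod"]  # per-carrier production at full resolution
```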
-
Hi all,
At the moment I'm working with fairly heavy models that push the boundaries of the equipment I'm using. I don't have easy access to clusters, so I'm using a high-performance laptop (Dell with 32 GB RAM and an i7 processor) and the Gurobi solver, which in many cases is sufficient. For one project, though, I'm working with ~80 technologies, ~80 locations, and input timeseries profiles sampled at 15-minute resolution (i.e. 4 × 8760 = 35,040 timesteps).
By applying time masks, the design of the energy system can be approximated quite accurately without running the optimisation at high temporal resolution for a full year. However, when analysing the operation of the system, i.e. the flows and loads through it, it is preferable to have results at a uniform, high-resolution timestep throughout the whole year.
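For concreteness, a sketch of how such a masked, resampled first run could be configured, assuming the Calliope 0.6.x `model.time` options (`demand_power` and the model file name are placeholders from my own setup):

```python
import calliope

# A sketch, assuming Calliope 0.6.x: resample everything to 3-hourly, but
# keep the week around the peak of `demand_power` (placeholder tech name)
# at the original resolution via a time mask.
override = {
    "model": {
        "time": {
            "masks": [
                {
                    "function": "extreme",
                    "options": {"padding": "calendar_week", "tech": "demand_power", "how": "max"},
                }
            ],
            "function": "resample",
            "function_options": {"resolution": "3H"},
        }
    }
}

model = calliope.Model("model.yaml", override_dict=override)  # file name illustrative
model.run()  # plan mode at reduced resolution
```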
I was therefore thinking of a workflow where the system is first planned at lower temporal resolution using time masking, to reduce the computational burden. In a second step, the system could then be rerun in operate mode, fixing the capacity-related decision variables calculated in the first run, which should allow a higher temporal resolution in the rerun at similar computational cost (depending on the case, of course).
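To see what would have to carry over between the two runs, the capacity variables from the first run can be inspected. A sketch, assuming Calliope 0.6.x result names such as `energy_cap` and the `get_formatted_array` helper:

```python
# A sketch, assuming Calliope 0.6.x: the capacity decision variables are the
# timestep-free part of the results, which is exactly what would need to be
# stored and fixed for the second (operate-mode) run.
caps = model.results.drop_dims("timesteps")
print(list(caps.data_vars))  # e.g. ['energy_cap', 'storage_cap', ...]

# One variable, reshaped for readability:
print(model.get_formatted_array("energy_cap").to_pandas())
```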
I was wondering if anyone has tried this before, or has any tips on how to start implementing it? I find it hard to foresee the effect of first asking the model to resample the input data and then asking it to use the original timeseries data in the second step without fully reloading the model, since the calculated capacities would need to be stored for the second run.
Thanks for thinking along!