WP1.2 Coordination meeting November 15, 2017
Meeting Report WP1.2 ‘Modelica library for MPC’
1 MEETING SUBJECT, DATE
Subject: WP1.2
Date: 15-11-2017
Location: Skype for Business
Minutes taken by: Javier Arroyo (KU Leuven)
2 PARTICIPANTS
| Company/Organisation | Participant |
| --- | --- |
| KU Leuven | Lieve Helsen (WP Leader) |
| KU Leuven | Iago Cupeiro |
| KU Leuven | Javier Arroyo |
| KU Leuven | Filip Jorissen (joined later) |
| SDU | Krzysztof Arendt |
| LBNL | Michael Wetter |
| LBNL | David Blum |
| ENGIE Lab | Valentin Gavan |
| ENGIE – Axima (now at LBNL) | Lisa Rivalin |
3 AGENDA
- Get to know each other – short presentation (Who? Affiliation? Which interests? …)
- Agreement on tasks (Task 1.2.1, 1.2.2, 1.2.3) and human resources.
- If possible, main insights from the enquiry results: Dynamic optimization in Modelica: Applications, recent developments and future challenges (Dave Blum).
- More detailed discussion on tasks:
  a. Task 1.2.1: which benchmarks, which KPIs?
  b. Task 1.2.2: fair comparisons, starting from the formulations/algorithms/data of the different parties, leading to insights and recommendations for best practices.
  c. Task 1.2.3: requirements for a Modelica library for MPC; scalability.
- How to get started?
- Berlin meeting
  d. Call for free presentations (combine with working paper?)
     i. Presentation of candidate models for benchmark emulators (KU Leuven, ENGIE, …)
     ii. Presentation of current MPC formulations (who?)
  e. Discussion
     i. KPIs to be used to evaluate control performance and toolchain performance (with respect to engineering cost and data availability for controller design)
     ii. Level of model detail in emulators
     iii. Deterministic versus stochastic models
  f. Publication and dissemination plan
- Brainstorm about additional experts to be included, mainly optimization (e.g. Moritz Diehl (Freiburg) …)
- Wrap up
4 DISCUSSIONS
4.1 GET TO KNOW EACH OTHER – SHORT PRESENTATION
Lieve introduces the meeting and WP1.2. She invites everybody to present themselves:
- Lieve Helsen: WP leader. Full professor at KU Leuven (BE) in the field of thermal systems, working on MPC for some years now.
- Javier Arroyo: starting PhD student supervised by Lieve Helsen. Interested in benchmarks and performance indicators for the assessment of control algorithms, and in MPC applied to buildings.
- Iago Cupeiro: PhD student for one year already, supervised by Lieve Helsen. He looks into the MPC formulation and is going to apply MPC in real buildings; currently focused on the state estimator.
- Lisa Rivalin: ENGIE-Axima employee. She has been working at LBNL with David Blum and Michael Wetter for one year and will stay there for one more year. Main interests are data mining and MPC.
- Krzysztof Arendt: postdoc at Syddansk Universitet (SDU), DK. Has been working on MPC for one and a half years; his main task is to provide models. He uses Modelica and black-box models in Python, as well as PyFMI for model exchange and co-simulation. He wants to investigate MPC implementation in a large building with many measurements, and is currently working on a study of parameter estimation and the effect of model accuracy on MPC performance. He spent 6 months at LBNL.
- Valentin Gavan: works at ENGIE Lab on heating and cooling of buildings at district level, including thermal networks. Uses Modelica and can share models. He does not work directly with MPC, but plans to become more active in this field in the near future.
- David Blum: postdoc at LBNL; his supervisor is Michael Wetter. Developing emulators for testing, a model library for MPC, and a Python platform for implementing MPC (pyMPC). Focus on making MPC scalable.
4.2 AGREEMENT ON TASKS (TASK 1.2.1, 1.2.2, 1.2.3) AND HUMAN RESOURCES.
Lieve asks whether at this stage any tasks should be added to the three tasks listed in the Google doc: https://docs.google.com/document/d/1OoPAIO3qfjUBx_kLH5B_rvWbqsWVBQdHDtoTSYJ9ZB0/edit?usp=sharing
Answers:
- Krzysztof: the goal is very broad; he proposes to be more specific. Lieve clarifies that we should first define the KPIs and then compare the different approaches of each partner. How broad we go depends on the partners that participate. Since the IBPSA Project 1 does not bring in funding, this work is expected to be framed within other ongoing (funded) projects.
We then agreed on the task leaders for each of the subtasks:
- Javier Arroyo as task leader for Task T1.2.1.
- Iago Cupeiro as task leader for Task T1.2.2.
- Dave Blum and Michael Wetter as task leaders for Task T1.2.3 (Filip may be too busy now with finalizing his PhD). Lieve explains that the Task Leaders will not be burdened with much administration; it is more about task coordination, content, and active participation. For an overview see: https://docs.google.com/document/d/1HqDdz421Wn9ylR8HCvoK9x781F9CWJPa3-h6gnit6HU/edit?usp=sharing
4.3 MAIN INSIGHTS FROM ENQUIRY RESULTS - DYNAMIC OPTIMIZATION IN MODELICA: APPLICATIONS, RECENT DEVELOPMENTS AND FUTURE CHALLENGES (DAVE BLUM)
Dave presents a paper in preparation about dynamic optimization (DOP) in Modelica. Modelica is not used that much for optimization; he tries to find out why, and how to tackle this issue. For this purpose, he has approached different groups of experts with different questions:
- What are the state of the art, toolchains, and applications in the field?
- What are the current barriers for Modelica-based dynamic optimization? Are there any other libraries that link Modelica with optimization tools? Right now there is just JModelica, with a direct collocation algorithm (which may not be the best). There are only a few solvers; we may need new dynamic optimization solvers.
- Improved convergence needed?
- What is needed to establish it in industry and academia?
- Scalability is highly relevant, as is defining benchmarks and managing different expectations. We need benchmarks for scaling studies.
- Further research and development is needed to push the limits of the system size (e.g. symbolic elimination framework). Different approaches would be needed for large systems.
- What about MILP and MINLP? There is no link with Modelica; currently these are tightly integrated with algebraic modeling languages like GAMS.
- Reverse flow in DOP: very important for different buildings, but difficult.
- DOP data input (e.g. weather, as spline functions or tables): very important; how does it affect optimization performance?
- Promising approaches: system simulation, Runge-Kutta methods, shooting methods? Adjoint methods?
- They are developing a scalable approach for the linear solvers: MUMPS, MA27, MA57.
- Conclusion and summary: is collocation the right method? Scalability is an issue to tackle. There is a lot of promise for using optimization in Modelica for energy dynamics. Lieve brings up the adjoint method, which others are already applying in the context of topology optimization of heat exchangers and thermal networks. With this method the problems can be much larger, but constrained optimization seems to be a problem there. It would be interesting to invite optimization experts to our meetings in order to learn from their knowledge and experience (also in other application fields).
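For context, the dynamic optimization problems discussed in this section take roughly the standard optimal-control form below; direct collocation (the method used in JModelica) transcribes this into a finite-dimensional nonlinear program by parameterizing the state and control trajectories on a time grid and enforcing the dynamics at collocation points. The notation is illustrative, not taken from the meeting:

```latex
\min_{x(\cdot),\,u(\cdot)} \; \int_{0}^{T} L\bigl(x(t), u(t)\bigr)\,dt
\quad \text{s.t.} \quad
\dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad
g\bigl(x(t), u(t)\bigr) \le 0, \qquad
x(0) = x_0 ,
```

where $x$ are the states (e.g. zone temperatures), $u$ the controls (e.g. heating power), and $g$ the operational constraints (e.g. comfort bounds).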
4.4 MORE DETAILED DISCUSSION ON TASKS
The discussion is then redirected back to the tasks. In Task 1.2.1 we should define a benchmark for the assessment of control algorithms. The agreed approach is to compile an inventory of the emulator models that we have available now, split them into hydronic and air-based systems (the main difference between European and US systems), and increase the complexity stepwise:
- Single zone residential building
- Multi-zone residential building
- Single zone office building (added because it is a typical building type in the US)
- Multi-zone office building
- Multi-zone office building with hybrid systems
Lieve proposes that the candidate emulators be presented in Berlin, to decide which ones to use in the benchmark (after modifications where needed) and to look into the level of detail. As David points out, increasing the complexity of the building also increases the complexity of the energy system and its associated control.
Krzysztof: EnergyPlus may be preferred for larger systems; do we plan to use EnergyPlus in this project? SDU has emulators in EnergyPlus, but is also prepared to switch to Modelica. Michael: EnergyPlus does not allow the level of detail we need for non-MPC simple control sequences. Conclusion: we use Modelica for the benchmarks (Task 1.2.1). The existing case (an EnergyPlus emulator controlled by MPC) may still bring insights for Task 1.2.2 on MPC formulation, but the MPC comparative study will be performed using the common Modelica emulators.
These benchmarks for testing controllers form BOP-TEST = Building Operation Performance Test (or was it 'Building Optimization Test'?). BOP-TEST should be accessible (Filip: use FMUs and put them online) and well documented, such that it becomes the internationally recognized and accepted benchmark. LBNL knows how to get it into the ASHRAE standards.
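The intended testing workflow (a controller exercised against an emulator and scored on KPIs such as energy use and thermal discomfort) can be sketched in miniature. The code below is purely illustrative, not project code: a real setup would load a Modelica emulator exported as an FMU (e.g. via PyFMI), whereas here a trivial first-order single-zone model stands in so the sketch is self-contained. All names and numbers are hypothetical.

```python
class SingleZoneEmulator:
    """Stand-in emulator: one thermal zone with a simple heater.
    A real benchmark would wrap a Modelica model exported as an FMU."""
    def __init__(self, t_zone=18.0, t_out=5.0):
        self.t_zone = t_zone   # zone temperature [degC]
        self.t_out = t_out     # outdoor temperature [degC]

    def step(self, heat_input, dt=3600.0):
        # First-order zone dynamics: heat losses to outside plus heating.
        ua, capacity = 200.0, 5e6          # loss coefficient [W/K], thermal capacity [J/K]
        dq = heat_input - ua * (self.t_zone - self.t_out)
        self.t_zone += dq * dt / capacity
        return self.t_zone

def controller(t_zone, setpoint=21.0, gain=2000.0, max_power=5000.0):
    """Simple proportional controller standing in for an MPC under test."""
    return min(max(gain * (setpoint - t_zone), 0.0), max_power)

def run_test(hours=24, setpoint=21.0):
    """Run the controller against the emulator and return two example KPIs."""
    emulator = SingleZoneEmulator()
    energy_kwh, discomfort_kh = 0.0, 0.0
    for _ in range(hours):
        u = controller(emulator.t_zone, setpoint)
        t = emulator.step(u)
        energy_kwh += u / 1000.0                   # KPI 1: energy use [kWh]
        discomfort_kh += max(setpoint - t, 0.0)    # KPI 2: discomfort [K.h]
    return energy_kwh, discomfort_kh

energy, discomfort = run_test()
print(f"energy use: {energy:.1f} kWh, discomfort: {discomfort:.1f} K.h")
```

Publishing the emulator as an FMU behind a fixed interface like `step()` is what would make such a benchmark controller-agnostic: any MPC implementation that can read measurements and write control inputs could be scored on the same KPIs.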
4.5 HOW TO GET STARTED?
- We start by collecting the information that is already available now, by creating Google docs:
- Google doc collecting information about emulators
- Google doc collecting information about useful performance indicators for controller assessment
- Google doc collecting information about MPC formulation
- Google doc collecting information about other optimization experts that can either join the IBPSA Project 1 (work in the field of energy systems in buildings) or can be invited to share their optimization knowledge (may work in other application fields)
These Google docs are completed by all of us and serve as a starting point for the discussion. During the next Skype meeting (January) we'll have a first look at the Google docs and decide what to present and discuss at the Berlin meeting (February).
4.6 BERLIN MEETING
At the meeting in Berlin we can have:
- Presentations of:
  - Available emulators
  - MPC approaches used today
- An in-depth discussion on:
  - What is available? What should be developed? By whom?
  - A template for the documentation of emulators
  - Which performance indicators are preferred? The amount of data needed to develop the controller model is also an important aspect, and the toolchain performance should be looked at as well.
  - Which level of model detail is needed?
  - Which MPC approaches should we implement and compare? Which objective functions, observers, …?
  - Which additional experts do we invite?
  - Publication and dissemination plan. This is clear for the library (open source). Which approach for the benchmarks (BOP-TEST)? Towards an ASHRAE standard? And for the MPC recommendations: a code of good practice?
4.7 BRAINSTORM ABOUT ADDITIONAL OPTIMIZATION EXPERTS TO BE INCLUDED
Postponed to the next meeting.
4.8 WRAP UP AND NEXT MEETING
The main conclusions are:
- We have three Task Leaders: Javier, Iago, Dave/Michael
- Four google docs need to be completed by all of us (ASAP but in any case a week before the next Skype meeting, let’s say this year ;-)):
- Available emulator models https://docs.google.com/document/d/1uT17uEteoXmaDZVYm0DxGODje4Pxfe7oqKPCF4oIhmM/edit?usp=sharing
- Performance indicators for MPC Toolchain and Algorithm evaluation https://docs.google.com/document/d/1cON-KdJ7BFzSFODDW3JfYeLIm5iWKtTaktCOU1wgRF0/edit#heading=h.ub8zw98vrz07
- MPC formulations https://docs.google.com/document/d/1dyDA1t4eXMeW7-IvmvAW4D8UDeOEypeAq4CrqyTjdtM/edit?usp=sharing
- Proposals for extra optimization experts: https://docs.google.com/document/d/1twdi04jCLmhYcPEidqQvysYiemmekyYyNOaFc0B698I/edit?usp=sharing
- These google docs are the starting point for planning break-out sessions to be held in the Berlin meeting and in-depth discussions.
- Next Skype meeting in January. Please complete the Doodle poll before November 29 (agenda and call-in details will follow): https://doodle.com/poll/8yp27gkfvwayk5q9