
Docker / Container Support #786

Closed · regnets opened this issue Feb 20, 2020 · 87 comments · Fixed by #1056
Labels: 💡 enhancement-ideas (New feature or change request) · 💻 hardware support

Comments

@regnets commented Feb 20, 2020

Is your feature request related to a problem? Please describe.
I would like to run RaspberryMatic alongside Phoscon (a Zigbee bridge alternative from Dresden Elektronik) on the same Raspberry Pi.

Describe the solution you'd like
Ideally there would be a Docker integration for this; then both can be updated easily and they do not get in each other's way.

Describe alternatives you've considered
None so far. I am simply waiting for the integration, and would put up a bounty on this ticket as an incentive.

Additional context
I am aware that there have already been several tickets on this: #192, #248, #357. The last one brought RaspberryMatic to the x86 platform and to virtualized environments (see also https://homematic-forum.de/forum/viewtopic.php?f=65&t=54055#p538104). However, the fact that there is still no Docker integration fell by the wayside there.

@regnets (Author) commented Feb 20, 2020

I have created a bounty for this on Bountysource:
https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository

@jens-maus (Owner)

Thanks for the enhancement request issue. This will not be a simple undertaking, though, and is not as easy to implement as the x86/OVA variant. I will leave the ticket open nevertheless, in case there is more feedback or someone wants to work on it.

@jens-maus jens-maus added 💡 enhancement-ideas New feature or change request 💻 hardware support This issue refs tickets/issue introducing/fixing some hardware support 🔅 low priority This issue/ticket has low priority (not urgent) ❓ undecided No decision to accept or reject ticket yet labels Feb 24, 2020
@jens-maus jens-maus changed the title Raspberrymatic Docker Support/Dockerhub Repository. Docker Support/Dockerhub Repository. Feb 24, 2020
@denisulmer

I would also be very interested in a Docker image.

@jens-maus: Where do you see the biggest hurdles compared to an ESXi/OVA solution? I might be able to look into this soon.

@jens-maus (Owner)

@denisulmer Well, to do it "RaspberryMatic-like" or "buildroot-like", one would first have to let buildroot generate a Docker version and see which limitations come up and what would have to be adapted to work around them. After that, one could start porting the OCCU part of RaspberryMatic. I suspect there will not be a quick solution; it is all a fair amount of legwork.

@mpietruschka (Contributor)

Couldn't one simply ship the entire RaspberryMatic root image in a Docker image and run the init system as the CMD? System scripts would then have to be excluded.
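A minimal sketch of that idea, assuming a pre-extracted rootfs tarball; the file name and the init path are assumptions, not part of any actual build:

```dockerfile
# Hypothetical sketch: wrap a pre-extracted RaspberryMatic rootfs and run
# its init system as PID 1. Tarball name and init path are assumptions.
FROM scratch
ADD rootfs-armv7.tar.gz /
# /sbin/init would start all CCU services; host-specific system scripts
# (network setup, reboot, kernel module loading) would have to be disabled.
CMD ["/sbin/init"]
```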

@jens-maus (Owner)

@mpietruschka Feel free to try it and report back. I suspect, however, that this will not work out of the box just like that, and it also cannot be the final solution, because the Docker image should be built cleanly via buildroot. But as I said, go for it and report back, or send a pull request.

@mpietruschka (Contributor) commented Jun 16, 2020

Following Docker philosophy strictly, every service would have to be shipped in a separate container. In my view that only makes sense if those services are also updated separately, which is not to be expected here.

Is there already a Docker approach for RaspberryMatic? And can I download the plain rootfs for armv7 somewhere?

PS: Is there a chat/discussion channel? Then not every basic question would have to be settled here. Unless that is desired :)

Regards, Marti

@jens-maus (Owner)

Splitting the services into separate containers makes no sense for RaspberryMatic. I would not go down that road, because in the end it would have nothing to do with RaspberryMatic anymore. I do have a rough plan for how one could proceed, though. Broadly, it will probably look like this: https://github.com/AdvancedClimateSystems/docker-buildroot. That is, you build something around the RaspberryMatic buildroot environment and let buildroot generate a rootfs at the end that can be thrown into Docker. How much resistance one runs into there is of course hard for me to estimate. One simply has to test it, etc.

And if you want the armv7 rootfs, just download the *.img and extract the rootfs. Or use the *-ccu3.tgz; there the rootfs is included as a separate file. But as I said, I do not consider it a good approach to take the finished RaspberryMatic rootfs and extract what you need from it. It should be done the other way around: adapt the buildroot environment so that in the end a rootfs is generated for Docker use that can be imported into Docker directly. That way it can be integrated cleanly into the RaspberryMatic CI environment, and Docker versions for x86 could be shipped automatically as well (if it works out).
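A minimal sketch of the quick-and-dirty extraction route mentioned above (which, as noted, is not the preferred approach); the exact file names are assumptions:

```sh
# Hypothetical sketch: pull the rootfs out of the *-ccu3.tgz archive and
# import it as a Docker image. File names are assumptions.
tar -xzf RaspberryMatic-*-ccu3.tgz rootfs.tar
docker import rootfs.tar raspberrymatic:test
# Run it; a real setup would also need device and network options.
docker run -it raspberrymatic:test /bin/sh
```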

@mpietruschka (Contributor) commented Jun 16, 2020

I think I can follow your reasoning: a repurposed system build could require adapting the Docker image with every change. Keyword: long-term maintenance.

I don't imagine the differences being all that big, though (at least for now ;). Once it runs, some functions will certainly not be available in the WebUI, e.g.

  • network configuration
  • reboot
    ...

One would, however, need to see clearly which adaptations had to be made for Docker. A dedicated buildroot setup could then be started on that basis. Is that the direction you want to take?

Angelnu has already built a Docker image. He fetches your repo and the resources he can use. Is that the approach you did not want?

https://github.com/angelnu/docker-ccu

@angelnu (Contributor) commented Sep 1, 2020

In fact, I have switched my main installation to RaspberryMatic, since I love some of the extra features added by @jens-maus and did not have the passion to keep backporting them all to the original CCU base.

So I would be able to generate a Docker container out of a tarball containing the CCU filesystem. There are a few things I disable when running in a container because they do not make sense or are not possible there (loading modules, configuring the HDMI output, etc.). I also do a one-time install of the piVCCU device driver on the host to support all the hardware devices that require extra kernel modules (which must match the host kernel).

@jens-maus - if you agree, I would propose to contribute my Docker support to your project and discontinue my standalone version: I do not really see much value in using the vanilla CCU firmware... Let me know if you are interested.

BTW: I also speak German, but for technical matters I am more at home in English.

@jens-maus (Owner)

@angelnu This sounds great, and I would indeed be interested in your Docker support work. In fact, it would be great to get at least your build scripts, where you extract everything from the vanilla CCU firmware into a Docker environment, and port them over to RaspberryMatic. We could then perhaps add a mechanism that takes the final buildroot-generated tar archive with the CCU-like filesystem, extracts all the Docker-relevant parts, and builds the Docker image in one run. For this we would indeed also use GitHub Actions to integrate it more smoothly.

@angelnu (Contributor) commented Sep 1, 2020

Good, then let us get rolling @jens-maus :-)

In fact the build part is pretty simple: https://github.com/angelnu/docker-ccu/blob/master/Dockerfile

Most of what I do there is download and extract the original CCU and then apply some of your patches, so it would be "just" the lines after line 52.

So some questions to get moving:

  • how do we get a tarball containing the RaspberryMatic filesystem? Would you add another target to your Makefile?
  • I apply some patches to the filesystem (see https://github.com/angelnu/docker-ccu/tree/master/additions). Is it possible in buildroot to apply certain patches only for the Docker outputs? I could still apply the patches in the Dockerfile, but it might be cleaner to do it in the buildroot step.
  • what architectures would we support? If we generate tarballs for x86 and arm/arm64, we can build containers for all those platforms. I can also generate a meta multi-architecture manifest that references them, so users could run docker run raspberrymatic regardless of platform (see the sketch after this list).
  • do you have a build ID for uploading to Docker Hub (or any other container registry)? If not, I would create one. We can then use my docker-ccu Docker repo or create a new one -> no preference there.
  • for the host setup I use https://github.com/angelnu/docker-ccu/blob/master/deploy.sh. Basically it installs the piVCCU device drivers (so we can support any hardware supported by them), adds a few udev rules to ensure Docker gets access to the hardware adapters without being privileged, and then generates the docker run options to start with. The last part (starting Docker) depends on which container orchestration is being used: Docker, Swarm, Kubernetes. My proposal would be to have a folder in your repo with the script and to generate a tarball in the release. This only needs to be executed once.
  • (advanced) do you have any test cases? We might test the resulting containers with GitHub Actions by using the piVCCU dummy adapter.
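A hedged sketch of the multi-architecture idea from the list above, assuming per-architecture images have already been built and pushed (image names and tags are assumptions):

```sh
# Hypothetical sketch: tie per-architecture images together under one name
# so the same `docker run` works on any platform. Names are assumptions.
docker manifest create example/raspberrymatic:latest \
    example/raspberrymatic:latest-amd64 \
    example/raspberrymatic:latest-arm32v7 \
    example/raspberrymatic:latest-arm64v8
docker manifest push example/raspberrymatic:latest
```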

@martinpichlo

Hi,

all this sounds great. Have you made any progress in the last two months? Is there something I could help with?

Martin

@angelnu (Contributor) commented Nov 1, 2020

Not yet - I was hoping @jens-maus could chime in on the questions above - especially since generating a tarball from his build is the prerequisite for generating a Docker image out of it.

@ProfDrYoMan

Any progress? Added $10 on Bountysource. :)

@angelnu (Contributor) commented Dec 31, 2020

@jens-maus - I have a few days to work on this before going back to work. The main question for me to start is whether you want to produce tarballs from the buildroot (which I would pull into my project to build the Docker image) or whether you want to merge my Docker steps into your project.

Since I am not so familiar with your project structure, I would start by "just" copying the intermediate tarball from a local build into my project and progress to uploading a Docker image for testing.

@jens-maus (Owner)

If possible I would of course prefer to merge your work into this repository! So please go down that road.

@angelnu (Contributor) commented Jan 1, 2021

Good - it is also my preferred option, since I personally use RaspberryMatic and therefore would not be able to provide good support for the official Homematic versions.

OK, I will prepare a PR for your repo.

@jens-maus jens-maus added 🙏 help wanted Extra attention is needed and removed ❓ undecided No decision to accept or reject ticket yet labels Jan 2, 2021
@jens-maus jens-maus added this to the future release milestone Jan 2, 2021
@jens-maus jens-maus linked a pull request Jan 3, 2021 that will close this issue
@jens-maus jens-maus pinned this issue Jan 3, 2021
@angelnu (Contributor) commented Jan 14, 2021

I am also running it in "production" in K8s with 100+ devices (a mix of Homematic and Homematic IP wireless devices) and 2 LAN gateways, with no problems - running on Intel is a really welcome performance boost!

I was looking at the HB-RF-ETH as a way to achieve high availability on the RaspberryMatic side - if one of my Kubernetes nodes dies, RaspberryMatic gets re-deployed. Currently I have to plug USB PCBs into each of the nodes; with the HB-RF-ETH I can deploy just one (plus a spare), independent of the location and number of Kubernetes nodes. Sounds paranoid, but I do not want my home automation to go down for any reason :-)

@hanzoh commented Jan 15, 2021

Good morning,
tonight’s update broke the connection to my HB-RF-USB:

Identifying Homematic RF-Hardware: HmRF: none, HmIP: none, OK
Updating Homematic RF-Hardware: no GPIO/USB connected RF-hardware found

No other issues in the docker logs.

A successful start from yesterday looked like this in dmesg; today I see no such messages at all:

[Thu Jan 14 02:01:37 2021] raw-uart raw-uart: Reset radio module
[Thu Jan 14 02:01:44 2021] eq3loop: created slave mmd_hmip
[Thu Jan 14 02:01:44 2021] eq3loop: created slave mmd_bidcos
[Thu Jan 14 02:01:46 2021] eq3loop: eq3loop_open_slave() mmd_bidcos
[Thu Jan 14 02:01:52 2021] eq3loop: eq3loop_open_slave() mmd_hmip
[Thu Jan 14 02:01:52 2021] eq3loop: eq3loop_close_slave() mmd_hmip
[Thu Jan 14 02:01:52 2021] eq3loop: eq3loop_open_slave() mmd_hmip
[Thu Jan 14 02:01:52 2021] eq3loop: eq3loop_close_slave() mmd_hmip
[Thu Jan 14 02:01:52 2021] eq3loop: eq3loop_open_slave() mmd_hmip

@hanzoh commented Jan 15, 2021

I saw that there are two newer builds, so I manually updated to 3.55.5.20210114-67aab13.
This is working again!

@robologo1

Hello angelnu,

thanks for your support on the issue "Problems with starting with the HmIP-RFUSB" #36. As suggested, I tried this image (ghcr.io/jens-maus/raspberrymatic:snapshot), unfortunately without success.

During the start of this container everything looks fine:
[screenshot]

The HmIP-RFUSB device was found and even updated. But in the RaspberryMatic console I only see the Virtual Remote Control:
[screenshot]

And with this I'm not able to pair any HmIP device.

Do you have any suggestions? Or do you need further information? Please tell me which logs you need. Thanks!

Regards

ROBOlogo

@jens-maus (Owner) commented Jan 15, 2021

> thanks for your support on the issue "Problems with starting with the HmIP-RFUSB" #36. As suggested, I tried this image (ghcr.io/jens-maus/raspberrymatic:snapshot), unfortunately without success.

From your screenshots I can only tell that the HmIP-RFUSB is correctly identified and seems to work. If the pairing dialog counts down in time, the HmIP-RFUSB works fine.

> And with this I'm not able to pair any HmIP device.
>
> Do you have any suggestions? Or do you need further information? Please tell me which logs you need. Thanks!

Well, if your HmIP-RFUSB is not able to find any HmIP device during pairing, this can have several different reasons. One is that the HmIP-RFUSB is a suboptimal RF module; in general I would suggest using a RPI-RF-MOD connected to a HB-RF-USB/HB-RF-ETH instead. You should also use a USB extension cable rather than plugging the HmIP-RFUSB directly into the device you are running Docker on. So make sure the HmIP-RFUSB is located 1-2 meters away from the system you run Docker on, and then try again.

@angelnu (Contributor) commented Jan 15, 2021

The boot sequence looks good.

@robologo1 - just confirming: you clicked on "Learning Devices", you can see HmIP there and can start the learning process; then, during the 60 seconds, you put a device into learning mode but it does not get discovered. Correct?

If that is the case then, as @jens-maus says, the most likely problem is suboptimal reception for your adapter. Another reason could be that your host is overloaded or being throttled (for example, a Raspberry Pi 4 without any cooling), so the container experiences too much lag.

@robologo1

@angelnu, @jens-maus - Thanks for your quick response. :) I tried a lot of things and found out that the issue was not related to the installation of the USB adapter, but only to the way I tried to reset the sensors to factory defaults. All my window contacts were still paired to my old installation and needed to be reset before the new pairing. Now it works perfectly! Thanks again!!! 👍

@angelnu (Contributor) commented Jan 18, 2021

Good to hear :-) For next time (or for those reading this): backup/restore works fine, so it is possible to back up from another system, either a hardware RaspberryMatic or my previous angelnu/ccu container, and restore into the new RaspberryMatic OCI container without having to re-pair any devices.

I also debugged a problem related to 64-bit kernels and remote filesystems in #903. If anyone runs this combination, you should be aware of it. For GlusterFS I added a workaround in my Kubernetes Helm chart (WIP) at https://github.com/angelnu/helm/tree/master/raspberrymatic

@jens-maus (Owner)

@angelnu Thanks for the hint. BTW: it would be great if you could finish/complete your K8s documentation in the wiki anytime soon :)

@angelnu (Contributor) commented Jan 19, 2021

Haha, now that you mention it. I first thought of documenting it the "classic way" with an example YAML to deploy, but then I thought about directly providing a Helm chart, and then I wanted to add it to your CI so it could be updated automatically, so I started investigating the available GitHub Actions...

This weekend :-)

But seriously - if anyone here is interested in testing my Helm chart, and would perhaps also like to test with GlusterFS for high availability, I would appreciate it. It does not feel right to release something that has been tested only by me. If nobody here volunteers, I would ask the Home Assistant community - I have also contributed to their Helm chart in the past.
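A hedged sketch of how one might try the WIP chart locally, installing it straight from a checkout of the repository linked above (the release name and namespace are arbitrary, and the chart's values are assumptions):

```sh
# Hypothetical sketch: install the WIP chart from a local checkout.
# Chart path follows the repo linked above; names are assumptions.
git clone https://github.com/angelnu/helm.git
helm install raspberrymatic ./helm/raspberrymatic \
    --namespace home-automation --create-namespace
```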

@yoogie27

I am moving from a stable FHEM + HM-CFG-LAN adapter setup to a dockerized CCU + HM-CFG-LAN. Initially I had issues getting the adapter to work, but after a while and a lot of patience it connected. I have no encryption key, though. The adapter has a stable IP. Yet the CCU loses the connection to it every now and then. It used to be stable for a day or two, but now it is constantly disconnecting, raising the "RF-Gateway-Alarm" alert.
When I restart the puck, it goes back to connected for a while until it drops again...

Nothing in the logs... And I did not deploy the kernel modules, since my adapter is LAN-based...

Any tips?

@jens-maus (Owner)

@yoogie27 I would suggest first trying a CCU + HM-CFG-LAN setup without Docker to rule out a general problem with the HM-CFG-LAN in combination with a CCU. You are here in the alpha/test issue regarding the dockerization of a CCU, and this is still alpha/beta quality. My suspicion is that the HM-CFG-LAN simply does not work well with a CCU. That is also why, BTW, it is not listed in the documentation as a supported RF gateway device (https://github.com/jens-maus/RaspberryMatic/wiki/Einleitung#vorraussetzungen). It has never been tested, and if you are the first one, try it with a real CCU system before you go the Docker way.

@nicx commented Jan 19, 2021

@jens-maus @yoogie27 I have been running my virtual CCU environments successfully with 3 HM-CFG-LAN gateways for years now (coming from the original CCU2, trying Homegear for a few months, then more than a year on debmatic before I switched to the Docker CCU about a month ago) without any issues. Therefore I cannot confirm jens-maus's suspicion at all; quite the opposite. ;)

@jens-maus (Owner)

@nicx Are these really HM-CFG-LAN gateways, or are they HM-LGW-O-TW-W-EU devices?

@nicx commented Jan 19, 2021

I am really talking about the very old HM-CFG-LAN gateways, the small round ones. I have never had any problems with them.

@yoogie27

Thanks @nicx and @jens-maus. The OCCU page lists the HM-CFG-LAN as supported as well, at least without OTA firmware updates. So I would suspect that rfd supports it.

I actually found that rfd logs to /var/log/messages, and there it says that rfd could not connect to the host... I will continue investigating. Hopefully I can get it working.

@jens-maus (Owner)

Then why has nobody told me that they work with RaspberryMatic? ;-) Then it is probably your job now to help @yoogie27 :)

@yoogie27

@jens-maus @nicx Hm... in the end it is an issue with my HM-CFG-LAN... When I unplug the power, it works for a while. I will keep digging. Great project, by the way!

@angelnu angelnu mentioned this issue Jan 20, 2021
@yoogie27

> @jens-maus @nicx Hm... in the end it is an issue with my HM-CFG-LAN... When I unplug the power, it works for a while. I will keep digging. Great project, by the way!

I think I know what's wrong. I have Watchtower deployed to keep my containers up to date. Whenever there is a new snapshot, it jumps ship and updates everything. After that, the old HM-CFG-LAN bug kicks in and renders the puck unusable. Only reflashing the latest firmware gets it out of that state; power-cycling is not enough.

I am fed up, so I have ordered the bundled Raspberry Pi with the radio module and a clock module to run RaspberryMatic on it.
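One way to avoid such surprise updates would be to opt the container out of Watchtower via its enable label (a hedged sketch; the container name is arbitrary and the image tag is the snapshot tag used earlier in this thread):

```sh
# Hypothetical sketch: start the container with Watchtower's opt-out label
# so automatic snapshot updates no longer restart it.
docker run -d --name raspberrymatic \
    --label com.centurylinklabs.watchtower.enable=false \
    ghcr.io/jens-maus/raspberrymatic:snapshot
```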

@nicx commented Jan 21, 2021

@yoogie27 just FYI: I cannot confirm these problems. I am using Watchtower too, and all 3 of my HM-CFG-LANs are working without these problems. So maybe yours is just defective ;)

@jens-maus jens-maus removed the 🙏 help wanted Extra attention is needed label Jan 21, 2021
@jens-maus (Owner)

Please note that I will now close this ticket/issue, since the general implementation of the Docker/container integration is done and seems to work flawlessly. All the recent discussion in here (note that this is not a discussion forum!) is not really related to the Docker/container integration but specific to using the old/obsolete HM-CFG-LAN devices with RaspberryMatic.

So please allow me to close this issue/ticket, and thanks to all the contributors, especially @angelnu for starting the whole Docker implementation in the first place. I am sure this will be appreciated by the many users who have already been waiting a long time for such an additional virtualization option. So thanks again!

And last but not least: anyone who contributed to the initial bounty (https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository), e.g. @regnets, @ProfDrYoMan, @mpietruschka, should please allow @angelnu to mark this bounty as "solved" and thus receive the money for it!

@jens-maus jens-maus unpinned this issue Jan 21, 2021
@nicx commented Jan 22, 2021

@jens-maus FYI: the Docker tag "latest" is still missing, so following the install documentation will currently not work ;)

@jens-maus (Owner)

@nicx I know. But you do know that this hasn't been released yet, right? So simply wait for the final release version to appear sometime next week. And then you should of course stop using the snapshot tag for your production environment.
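Until "latest" exists, one workaround would be to pin an explicit tag instead (a hedged sketch; the version tag shown is only an assumption based on the build number mentioned earlier in this thread):

```sh
# Hypothetical sketch: pull a pinned build instead of the missing `latest`
# tag. The exact tag is an assumption, not a confirmed published tag.
docker pull ghcr.io/jens-maus/raspberrymatic:3.55.5.20210114-67aab13
```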

@angelnu (Contributor) commented Jan 22, 2021

@jens-maus has been working on this at least as much as I have, polishing my first draft - I am not sure whether the bounty allows it, but @regnets, @ProfDrYoMan, @mpietruschka, you should consider @jens-maus as the target of your donations for all the work he did here and does in general to run this project.
