
[WIP] Initial support for containers #1056

Merged
merged 40 commits on Jan 5, 2021

Conversation

angelnu
Contributor

@angelnu angelnu commented Jan 3, 2021

Description

Adds support for building Open Container Initiative (OCI, i.e. Docker-compatible) containers.

Related Issue

#786

Types of changes

  • Docs change / refactoring / dependency upgrade
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Alternate Designs

  • building just the tarball and keeping the docker wrapper in current angelnu/ccu git repo.
  • extract files from ova images using qemu tools

Both are workarounds, since they would patch RaspberryMatic after it has been built, which makes it very hard to avoid running broken builds.

Possible Drawbacks

Kernel headers must be kept as old as possible to support running on hosts with older kernels. For example, Synology NAS devices are still using kernel 4.4, which is the oldest currently supported by buildroot.

The old headers are needed in the OCI images AND in the multilib toolchain. I did not create new definitions for multilib, since I do not see what problem using the older headers there would cause, but it could be done if anyone sees an issue.

Verification Process

Started the container on my amd64 workstation, which uses kernel 4.19. All init error problems are gone and the UI comes up successfully. I still need to test with actual RF devices and build & test on arm.

Beyond that, a CI job will need to be contributed that uploads to a container registry. I propose to use https://quay.io/repository/angelnu/raspberrymatic (since Docker Hub's new policy is not great).

Release Notes

  • Container images available for amd64, arm and arm64

Contributing checklist

  • My code follows the code style of this project.
  • I have read the CONTRIBUTING and LICENSE document.
  • I fully agree to distribute my changes under Apache 2.0 license.

@angelnu angelnu changed the title Oci [WIP] Initial support for containers Jan 3, 2021
@jens-maus jens-maus added 💡 enhancement-ideas New feature or change request 💻 hardware support This issue refs tickets/issue introducing/fixing some hardware support labels Jan 3, 2021
@jens-maus jens-maus added this to the future release milestone Jan 3, 2021
@jens-maus jens-maus linked an issue Jan 3, 2021 that may be closed by this pull request
@angelnu
Contributor Author

angelnu commented Jan 3, 2021

Build command

make PRODUCT=raspmatic_oci_amd64 release -j 8

Run command

docker run -ti --name ccu -v ccu_data:/usr/local -p 8080:80 --privileged --restart=always --hostname homematic-raspi raspberrymatic:amd64-latest
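The run command above could be wrapped in a small launcher script, for example (a sketch; the variable and function names are illustrative, not part of the PR):

```shell
#!/bin/sh
# Sketch of a launcher around the docker run command above.
# IMAGE, DATA_VOLUME and HTTP_PORT are illustrative names, not from the PR.
IMAGE="raspberrymatic:amd64-latest"
DATA_VOLUME="ccu_data"
HTTP_PORT="8080"

run_cmd() {
  # Assemble (but do not execute) the full command line.
  printf 'docker run -ti --name ccu -v %s:/usr/local -p %s:80 --privileged --restart=always --hostname homematic-raspi %s\n' \
    "$DATA_VOLUME" "$HTTP_PORT" "$IMAGE"
}

run_cmd   # print the command; pipe it into sh to actually start the container
```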

Notes:

Boot output

Identifying onboard hardware: oci, OK
Initializing RTC Clock: onboard, OK
Running sysctl: OK
Checking for Factory Reset: not required
Checking for Backup Restore: not required
Initializing System: OK
Starting logging: OK
Identifying Homematic RF-Hardware: .....HmRF: HM-MOD-RPI-PCB/HB-RF-USB@usb-0000:00:14.0-1, HmIP: HM-MOD-RPI-PCB/HB-RF-USB@usb-0000:00:14.0-1, OK
Updating Homematic RF-Hardware: HM-MOD-RPI-PCB: 2.8.6, OK
Starting irqbalance: OK
Preparing start of hs485d: no Hm-Wired hardware found
Starting xinetd: OK
Starting eq3configd: OK
Starting lighttpd: OK
Starting ser2net: no configuration file
Starting ssdpd: OK
Starting NUT services: disabled
Initializing Third-Party Addons: OK
Starting LGWFirmwareUpdate: ...OK
Setting LAN Gateway keys: OK
Starting hs485d: no Hm-Wired hardware found
Starting multimacd: .OK
Starting rfd: .OK
Starting HMIPServer: .....OK
Starting ReGaHss: .OK
Starting Third-Party Addons: OK
Starting crond: OK
Setup onboard LEDs: booted, OK

Please press Enter to activate this console.

Tests done

  • Build release
  • Start new container
  • Update existing container - data is kept
  • Reset to factory - reboot works and WebUI first-setup is launched
  • Backup and Restore
  • Teaching new devices
  • Import backup from another CCU
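The "update existing container" case above amounts to replacing the container while reusing the named data volume. A minimal sketch (the DRY_RUN guard is added for illustration; image and volume names are taken from the run command in this thread):

```shell
#!/bin/sh
# Sketch: update the container while keeping the /usr/local data.
# With DRY_RUN=1 (the default here) the commands are only printed.
IMAGE="raspberrymatic:amd64-latest"
DRY_RUN="${DRY_RUN:-1}"

step() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

step docker pull "$IMAGE"   # fetch the new image
step docker stop ccu        # stop the running container
step docker rm ccu          # remove it - the data lives in the named volume
# re-create the container with the same volume, so /usr/local is preserved
step docker run -d --name ccu -v ccu_data:/usr/local -p 8080:80 \
  --privileged --restart=always --hostname homematic-raspi "$IMAGE"
```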

Known bugs:

None so far - web UI comes up and I can learn new devices

@jens-maus
Owner

Thanks and welcome back again ;-) This already looks quite promising. Nevertheless, as this is a major feature, please give me some time to test and integrate it thoroughly. So thanks for adding [WIP], because I think it might take some time to integrate and test this for all potential use cases (HB-RF-USB as well as GPIO bus use, etc.) as well as to check for potential issues with third-party CCU addons.

@angelnu
Contributor Author

angelnu commented Jan 3, 2021

On the container repository - I thought GitHub was charging, but it does not for open repositories - so same policy as quay.io for open source :-) I have not tried GitHub Actions yet, but it seems to be a quite standard CI, so I should be able to figure it out and add it to this PR (unless you want to do that part yourself).

EDIT
I understand it will take some time - no rush - I will let it run in my K8S cluster to see how it goes, but it boots/feels very fast. Ideally we would enable daily builds first, before making a public release, so any breaking change can be caught.

On tests - since these are docker containers, it should be possible to add some automated testcases. I tried to start without any device, but Rega did not boot. Maybe with the dummy raw_device? It is an impressive project with all the features and supported HW, so manual testing would be a challenge...
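A very small automated check along these lines could simply poll the WebUI until it answers; a sketch (URL, function name and retry count are assumptions):

```shell
#!/bin/sh
# Sketch: wait until the container's WebUI answers over HTTP.
# The URL and the retry count are illustrative assumptions.
wait_for_webui() {
  url="$1"; tries="${2:-30}"; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "WebUI is up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "WebUI did not come up after $tries tries" >&2
  return 1
}

# e.g. after docker run: wait_for_webui http://localhost:8080/ 60
```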

@jens-maus
Owner

I have not tried GitHub Actions yet, but it seems to be a quite standard CI, so I should be able to figure it out and add it to this PR (unless you want to do that part yourself)

Feel free to generate a template workflow action yml if you like. But I guess for the final github packages setup I would need to take over anyway to get it smoothly integrated into the build+development workflow here in the main repository, so that the final distributed packages will be part of the jens-maus/RaspberryMatic github project. See, btw, here for some example yml files to deploy docker images to the new github container registry (ghcr.io): https://github.com/github/super-linter/tree/master/.github/workflows (I will also try to teach myself how to publish docker images to the new github container registry, which seems to me a proper place for the potential future raspberrymatic docker images.)

@jens-maus
Owner

I understand it will take some time - no rush - I will let it run in my K8S cluster to see how it goes, but it boots/feels very fast. Ideally we would enable daily builds first, before making a public release, so any breaking change can be caught.

This is also my plan, indeed. So first we get this thing stabilized to some extent, and then publish first development docker images via the github container registry so interested users can test it thoroughly.

On tests - since these are docker containers, it should be possible to add some automated testcases. I tried to start without any device, but Rega did not boot. Maybe with the dummy raw_device? It is an impressive project with all the features and supported HW, so manual testing would be a challenge...

It is challenging, yes. And indeed, after we have this thing out we can reuse these docker images for generating proper testcases. You have probably noticed, but I already have an occu testing repository which @hobbyquaker kindly developed and I took over (see https://github.com/jens-maus/occu-test). I already use that to test my changes on ReGaHss itself.

@angelnu
Contributor Author

angelnu commented Jan 4, 2021

I created a test oci-ci branch in my fork to test the docker build there before merging it into the main PR.

@angelnu
Contributor Author

angelnu commented Jan 4, 2021

I would probably add them to S06InitSystem for a start, because this is already used for some device related setup stuff. So if you add the linking of all common devices from /dev to /dev_host this would be great. Thanks!

  • Ok, adding there.
    EDIT -> found that we only need to mount devtmpfs within the container, so no additional dev:dev volume sharing is needed. Also, only the 2 multimac dynamic devices are needed.

I can add a deploy.sh script that would get uploaded to the release for the basic docker setup on a Debian-based system. It would, as you suggest, install the pivccu dkms module and start the container as a daemon with a data volume.

If you can add something like a deploy.sh script as a starting point for a script that sets up all dependencies on the host and starts the container accordingly, this would be great. Thanks. Later on we could then even think about generating a debian-like package around it, which debian-based users can then install more easily and which installs everything automatically and comes with a start script that even starts RaspberryMatic transparently.

  • Ok, adding first draft of script and uploading to artifacts
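A first draft of such a deploy.sh could look roughly like this (a sketch; the pivccu package and module names are assumptions based on the piVCCU project, and the commands are only printed unless DRY_RUN=0):

```shell
#!/bin/sh
# Sketch of a deploy.sh for a Debian-based host, as discussed above.
# Package/module names are assumptions based on the piVCCU project.
set -e
DRY_RUN="${DRY_RUN:-1}"
step() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. install the dkms kernel modules needed for the Homematic RF hardware
step sudo apt-get update
step sudo apt-get install -y pivccu-modules-dkms
step sudo modprobe eq3_char_loop

# 2. start the container as a daemon with a persistent data volume
step docker run -d --name ccu -v ccu_data:/usr/local -p 8080:80 \
  --privileged --restart=always raspberrymatic:amd64-latest
```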

Ok, I could add rules to download from there for nightlies, or also upload the OCI tars that are inputs for the following job to artifacts in nightlies.

Yes, please do so! This is highly appreciated.

  • Will add
    EDIT -> done

Ok, perfect. What would also be good would be to get some kind of nightly developer builds for the docker/oci platform and upload these images to the public docker registry as well, probably in a different namespace to distinguish them from the stable/release builds. And if possible, we should probably also put the image files into the snapshots release tag like for the other nightly build platforms, so that interested docker developers can more easily download and use them.

@angelnu
Contributor Author

angelnu commented Jan 5, 2021

@jens-maus - you will have to create a Personal Access Token for the Container Registry, called CR_PAT. See https://docs.github.com/en/free-pro-team@latest/packages/guides/migrating-to-github-container-registry-for-docker-images#authenticating-with-the-container-registry

I have configured the workflow to build & push the containers for schedule and push events, but not for PRs (since the secrets do not get passed to PRs from forks). You can see how they look at https://github.com/users/angelnu/packages/container/package/raspberrymatic
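For reference, the registry interaction behind that workflow boils down to a login against ghcr.io with the CR_PAT secret, followed by build and push. A sketch that only prints the commands (OWNER is a placeholder, not the actual namespace):

```shell
#!/bin/sh
# Sketch of the ghcr.io steps the workflow performs once CR_PAT is set.
# ghcr_push only prints the commands, so no docker daemon is needed here.
ghcr_push() {
  tag="ghcr.io/$1/raspberrymatic:latest"
  echo 'echo "$CR_PAT" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin'
  echo "docker build -t $tag ."
  echo "docker push $tag"
}

ghcr_push OWNER
```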

@jens-maus
Owner

This looks very good @angelnu. Thanks again a lot for all your work to get the RaspberryMatic docker story finally going 👍

Now with the new direct mounting of /dev_host, can you please adapt the documentation/cmd-line in your above comment (#1056 (comment)) until you have the example deploy.sh ready?

So what is actually still missing that might prevent this PR from being merged to master? AFAICS the principal docker functionality seems to work nicely, so I could merge this to master to get it tested by some users, and then we/you can send in other PRs for adapting specific things if required. What do you think?

@angelnu
Contributor Author

angelnu commented Jan 5, 2021

Thanks @jens-maus for all your work on RaspberryMatic and for being open to working with me on this big feature!

I think the only two missing parts are the deploy.sh and a Wiki page to document the advanced setups. I will update the comment above for the time being so IMO this PR could be merged.

Then we need testers :-) Would you announce this as alpha platform somewhere?

I would also like to announce in angelnu/ccu that I plan to discontinue support for the containers built on the official CCU and point people to RaspberryMatic as the successor. If anyone still has interest in the official one, I would remove all my previous WebUI patches, which tend to break when eQ-3 releases new versions. But IMO I do not see much reason to use the original one when RaspberryMatic is a superior superset.

EDIT

I also want to test that the upgrade within the container works, BUT usually containers are updated from the host, so ideally the update from within the WebUI should be disabled and should just indicate that the container must be updated from the host (by running deploy.sh again).

It is also unusual to run a terminal with a password prompt in a container which cannot be accessed in daemon mode. It does not hurt too much, but it adds unneeded stuff to the log (and consumes some small amount of memory?). Any reason to keep it?

@jens-maus
Owner

I think the only two missing parts are the deploy.sh and a Wiki page to document the advanced setups. I will update the comment above for the time being so IMO this PR could be merged.

Ok, perfect. Then I just wait for the CI checks to finish and then I will squash+merge your PR on the master and will apply some streamlining, if required. Thanks again, and please continue to submit enhancements and potentially required updates to the docker stuff of RaspberryMatic using future PRs. I will definitely review them quickly and merge them with priority.

Then we need testers :-) Would you announce this as alpha platform somewhere?

Yes, I will announce this in the respective issue ticket as well as open an appropriate thread in the public homematic-forum.de. Do you have an account there as well, so I can reference you and give you appropriate credits? In addition, I will of course suggest you for receiving the collected bounty (even though it is only $20 atm) linked in the issue ticket (see https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository)

I would also like to announce in angelnu/ccu that I plan to discontinue support for the containers built on the official CCU and point people to RaspberryMatic as the successor. If anyone still has interest in the official one, I would remove all my previous WebUI patches, which tend to break when eQ-3 releases new versions. But IMO I do not see much reason to use the original one when RaspberryMatic is a superior superset.

Thanks for the support! Indeed, I also think that with this new raspberrymaticized docker :) implementation you could perfectly end your own endeavour without regretting it. But be assured that I will be happy to keep you as the maintainer of the docker environment for as long as you like. So any future ticket closely related to the docker stuff will certainly be routed to you, if you don't have a problem with that.

@jens-maus
Owner

I also want to test that the upgrade within the container works, BUT usually containers are updated from the host, so ideally the update from within the WebUI should be disabled and should just indicate that the container must be updated from the host (by running deploy.sh again).

Thanks for the reminder. I will review that as soon as everything is merged on the master branch. But I cannot say yet if I will really drop the WebUI-based update altogether. While for a usual docker user redeploying the container would indeed be the right approach, for some users it might also be desirable to perform a WebUI-based docker update (or in case of CI testing). So let's see. But I will certainly keep that in mind, yes.

It is also unusual to run a terminal with a password prompt in a container which cannot be accessed in daemon mode. It does not hurt too much, but it adds unneeded stuff to the log (and consumes some small amount of memory?). Any reason to keep it?

It was a mutual thing to enable it when you disabled the getty stuff :) But I agree, when running the raspmatic-docker as a daemon it doesn't make sense to keep the /bin/login in /etc/inittab. But please give me some time to think about it once more and see if there might be some reasons to keep it. At least I would keep it as a comment in /etc/inittab so that one can easily see how to re-enable it.

@angelnu
Contributor Author

angelnu commented Jan 5, 2021

My user in the homematic forum is vegetto. I am not very active there since I usually focus my activity directly on GitHub. But I would register for updates in the docker thread.

I am happy to keep supporting the docker variant, so sure, assign me any issue opened in GitHub. I will try to add some testcases where, after a successful build, I would start the container and check that the WebUI comes online. But this would go into its own PR :-)

On the updates: I modified the script called during the updates so that it hopefully works - I have not tested it yet. For plain docker users it should work, and advanced ones should already know, so IMO we do not need to disable it in the WebUI and can just document it in the corresponding Wiki page.

And sure, you decide on the login - when you re-added it I was not sure if something in RaspberryMatic depended on it, so I did not touch it for that reason. I am not blocked, so I did not bring it up at that point.

EDIT: CI completed :-) The containers were not built in the main project since it is a PR, but they look good at https://github.com/users/angelnu/packages/container/package/raspberrymatic. Just remember to add the PAT secret before you merge in your repo, and enable the container registry feature (it is beta, so disabled by default).

@jens-maus jens-maus merged commit 7032899 into jens-maus:master Jan 5, 2021
@jens-maus
Owner

Ok, merged. Thanks again! And you definitely earned most of the credits for finally getting this done. So here it is: 🥇 :)

And please continue to send in more such nice PRs, especially to get the docker support streamlined before the next final release (end of Jan 2021).

@nicx

nicx commented Jan 6, 2021

Great news, thanks for that!

Successfully merging this pull request may close these issues.

Docker / Container Support
3 participants