
Unable to login using a Docker image #149

Open
kang-makes opened this issue Mar 3, 2024 · 10 comments
Labels: question (A question about Davis and how it works), stale (This issue or PR hasn't been updated in a while)

Comments


kang-makes commented Mar 3, 2024

I had to do an ugly hack to make this software work on Docker/Kubernetes, because the reverse proxy was unable to find the files.

These are my configurations:

apiVersion: v1
kind: ConfigMap
metadata:
  name: davis
data:
  APP_ENV: "prod"
  APP_TIMEZONE: "Europe/Madrid"
  CALDAV_ENABLED: "true"
  CARDDAV_ENABLED: "true"
  WEBDAV_ENABLED: "false"

  DATABASE_DRIVER: "postgresql"

  AUTH_REALM: SabreDAV
  AUTH_METHOD: Basic

  DATABASE_URL: "postgres://davis:<REDACTED>@postgres:5432/davis?serverVersion=15&charset=UTF-8"

  ADMIN_LOGIN: "admin"
  ADMIN_PASSWORD: "<REDACTED>"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  nginx.conf: |
    worker_processes auto;
    error_log stderr warn;

    events {
      worker_connections  512;
    }

    http {
      include       /etc/nginx/mime.types;
      default_type  application/octet-stream;

      keepalive_timeout 30;

      upstream davis-sidecar {
        server localhost:9000;
      }

      server {
        server_name _;
        listen 80;
        access_log off;

        root /var/www/davis/public/;
        index index.php;

        rewrite ^/.well-known/caldav /dav/ redirect;
        rewrite ^/.well-known/carddav /dav/ redirect;

        charset utf-8;

        location ~ /(\.ht) {
          deny all;
          return 404;
        }

        location /health {
          add_header Content-Type text/plain;
          return 200 'ALIVE';
        }

        location / {
          try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ ^/index\.php(/|$) {
          fastcgi_pass    davis-sidecar;
          fastcgi_split_path_info ^(.+\.php)(/.*)$;
          include fastcgi_params;

          fastcgi_param   SCRIPT_FILENAME  $document_root$fastcgi_script_name;
          fastcgi_param   PATH_INFO        $fastcgi_path_info;
          fastcgi_split_path_info  ^(.+\.php)(.*)$;
        }

        location ~ \.php$ {
          return 404;
        }
      }
    }

Then, this is a snippet of the manifest I am using to deploy it.

      initContainers:
      - name: migration-runner
        image: ghcr.io/tchapi/davis:v4.4.1
        command:
          - /var/www/davis/bin/console
          - doctrine:migrations:migrate
          - --no-interaction
        envFrom:
          - configMapRef:
              name: davis
      - name: public-populator
        image: ghcr.io/tchapi/davis:v4.4.1
        command:
          - ash
          - -c
          - find /var/www/davis/public -maxdepth 1 -exec cp -rv {} /new-public \;
        volumeMounts:
          - mountPath: /new-public
            name: public
      containers:
      - name: davis
        image: ghcr.io/tchapi/davis:v4.4.1
        envFrom:
          - configMapRef:
              name: davis
        volumeMounts:
          - mountPath: /var/www/davis/public
            name: public
      - name: nginx
        image: nginx:1.25-bookworm
        volumeMounts:
          - mountPath: /etc/nginx/nginx.conf
            name: config
            subPath: nginx.conf
          - mountPath: /var/www/davis/public
            name: public
      volumes:
        - name: config
          configMap:
            name: nginx
        - name: public
          emptyDir: {}

In case you are not familiar with Kubernetes, let me explain:

Under volumes I declare an empty volume where I save the files from public, so the proxy and the php-fpm container share the same folder.

The other ConfigMap is used as a key-value store and mapped to environment variables with this:

        envFrom:
          - configMapRef:
              name: davis

The initContainers are one-shots that run before the long-running containers start. One copies everything from public into the empty volume I explained above, and the other runs the migrations so I do not have to do it manually. On each restart/upgrade it migrates automatically.

Well... it works, but I am unable to log in. Using admin, I created a user that is an administrator:
(screenshot)

If I use the admin dashboard it says that the credentials are invalid:
(screenshot)

But if I use the same credentials against /dav, I am able to log in:
(screenshot)

If I go to the log folder, it is completely empty.

❯ K -n k-davis exec -ti davis-6d7c76c5bd-cn57g -c davis -- ash
/var/www/davis # cd var/log/
/var/www/davis/var/log # ls -lha
total 1K     
drwxr-xr-x    2 www-data www-data       2 Jan 14 16:14 .
drwxr-xr-x    4 www-data www-data       4 Jan 14 16:14 ..

Here are all the questions/suggestions I ran into while deploying this service:

I don't know if the log folder I am looking into is correct. I know it is documented, but it seems odd that it is /var/www/davis/var/log instead of /var/log/davis.

On the other hand, having log files is an antipattern for Docker images: ideally, Docker images should run read-only (with volumes added for anything that needs writing, like session files or vcards), and logs should go to stdout/stderr instead of a file, so you can use docker logs to inspect the container.

As the reverse proxy also needs the files, having them in two images seems to be an issue too. I see a closed issue here and a roadmap card here. I am not a PHP expert, so I cannot help develop it, but I can lend a hand on the Docker image and its deployment if you give me a status update on this and guide me a bit on the approach you are looking for.


Now, about the issue itself: I do not know how to debug this password problem, because I cannot tell whether the issue is in the way I am copying the files or in the backend itself, as I have no logs at all of what is happening.

I am happy to help to debug it.


tchapi commented Mar 3, 2024

Hi @kang-makes and thanks for the comprehensive issue

I don't know if the log folder I am looking into is correct. I know that is documented but it seems odd that is /var/www/davis/var/log instead of /var/log/davis.

It's correct: /var/www/davis/var/log. As you haven't experienced any errors (application errors, that is), it's legitimately empty for a production instance (we use the fingers_crossed handler — more info here)

On the other hand, having log files is an antipattern for Docker images as (ideally) docker images should be run as read only (and adding volumes to the folder that should write something like session files or vcards or whatever) and logs should not write to a file but to stdout/stderr so you can use docker logs to look at the logs of the container.

You can modify the log file location with LOG_FILE_PATH to suit your needs (ex: /dev/stdout) — the default is actually the standard Symfony location. I'd welcome a PR to update the example docker-compose files, as I believe they should exhibit this behavior, as you point out.
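In the Kubernetes setup from the issue above, that would amount to a single extra key in the davis ConfigMap (a sketch under the assumption that LOG_FILE_PATH is the only variable involved; the surrounding manifest is copied from the issue):

```yaml
# Extra key for the `davis` ConfigMap shown earlier: route Symfony logs
# to the container's stdout so `kubectl logs` / `docker logs` picks them up.
apiVersion: v1
kind: ConfigMap
metadata:
  name: davis
data:
  LOG_FILE_PATH: "/dev/stdout"
```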

As the reverse proxy also needs the files, having them in two images seems to be an issue too. I see a closed issue #144 (comment) and a roadmap card here. I am not a PHP expert, so I cannot help develop it, but I can lend a hand on the Docker image and its deployment if you give me a status update on this and guide me a bit on the approach you are looking for.

It's only a Docker thing to be honest; there's no PHP involved. I haven't found the time to start on it yet, but my approach would be to run Caddy as a reverse proxy in the container, with supervisord doing the process management, as that's pretty standard nowadays.
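A minimal sketch of what such a combined image's process management could look like (hypothetical paths and program names, not the actual davis-standalone configuration):

```ini
; supervisord.conf sketch: supervisord runs in the foreground as PID 1
; and keeps php-fpm and Caddy running side by side, logging to stdout/stderr.
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:php-fpm]
command=php-fpm -F
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:caddy]
command=caddy run --config /etc/caddy/Caddyfile
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```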

Now, about the issue itself. I do not know how to debug this issue with the password because I do not know if the issues is in the way I am copying the files or the backend itself as I have no logs at all of what is happening.

You're trying to log in to the admin dashboard with a user account. The admin account is a single account that uses the ADMIN_LOGIN and ADMIN_PASSWORD env vars. CalDAV has a notion of "administrators", which is not the same (I agree it can be confusing).

tchapi added the "question" label Mar 3, 2024

tchapi commented Mar 4, 2024

First try is #150 if you want to have a look


kang-makes commented Mar 5, 2024

Hi!

Thank you very much for answering this fast and starting work on it.

I am kind of busy these days and cannot spend time on this, but I'll review it this Thursday/Friday.

Cheers :)


tchapi commented Mar 17, 2024

@kang-makes would you be so kind as to test https://github.com/tchapi/davis/pkgs/container/davis-standalone and see if it fits your needs?

Thanks

@kang-makes

Hi @tchapi!

I am not ignoring you. I just got sick last week.

The thing is, I don't like the approach you are taking, and, to open a good conversation (as talk is cheap), I prefer to discuss over a PR. So I created PR #152.

As I am the one who wants this functionality and has the knowledge to develop it, I would like to help: I cannot help on the PHP side, but I can on the Docker one.

PR #152 takes an iterative approach. Instead of a gazillion of docker images with duplicated code, the point is to create a slim image that is used as base to install supervisor/webservers on top of it.

As a first approach, I started with a multi-stage Docker build. The PR still misses CI, build and push; it will take me a couple more days. We can discuss it over the PR once I've finished, if you like.

Take a look and let me know what you think :)


kang-makes commented Mar 21, 2024

Ok, @tchapi. I have it.

I'll give you a tour.


First comes PR #152: it is finished and ready to be reviewed.

It seems the image is being built twice, but I added the cache lines, so it should use GitHub's cache service to cache all the layers, making the build almost instantaneous after the first one.
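For context, the cache wiring amounts to roughly this in the build step (a sketch assuming docker/build-push-action, which supports the GitHub Actions cache backend; not the exact lines from the PR):

```yaml
# Build step with layer caching via the GitHub Actions cache service.
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    cache-from: type=gha          # reuse layers cached by previous runs
    cache-to: type=gha,mode=max   # cache all intermediate layers, not just the final ones
```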

I ran the workflows on my fork; here are a couple of examples. Notice the 20-minute difference between the first build and the other 3 builds:
https://github.com/kang-makes/davis/actions/runs/8350454484
(screenshot)
https://github.com/kang-makes/davis/actions/runs/8350729913
(screenshot)

Once that is merged, building and publishing images should be really fast, and you are testing the build at PR time. You could add integration tests if needed. I do not know how to do that in PHP, but I can help you with the boilerplate if you teach me.


Then, the second PR adds the flavors to the images, so you can have an image tagged v4.0.0, another tagged v4.0.0-nginx, and another one tagged v4.0.0-caddy.

You can see the difference between PR #152 and the following one on my fork, which I left as an example: kang-makes#4

Note that it is an example so you can understand the approach I had in mind of having a PHP-FPM image and using that image as a base to add the webserver on top.

In case you need to develop or debug the image locally, you just have to build the base one:

(screenshot)

And as I shuffled the layers a bit, adding the code in the last possible layer, the builds between iterations (in case you develop over Docker) should be blazing fast.
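In outline, the layering idea looks like this (a hypothetical sketch, not the actual Dockerfile from the PR):

```dockerfile
# Multi-stage sketch: extensions are built in a throwaway stage so build
# dependencies never reach the final image, and the application code is
# copied last so code edits only invalidate the final layers.
FROM php:8.2-fpm-alpine AS extension-builder
# ... install build deps and compile the required PHP extensions here ...

FROM php:8.2-fpm-alpine
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
COPY --from=extension-builder /usr/local/lib/php/extensions/ /usr/local/lib/php/extensions/
WORKDIR /var/www/davis
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-progress   # cached until the lockfile changes
COPY . .                                      # code last: edits only rebuild this layer
```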


Let me know if you have any further questions about my approach; I am ready to discuss it. If you like, you can review and/or merge #152 and I'll finish kang-makes#4 so you can release a new version with caddy and nginx :D


tchapi commented Mar 22, 2024

Hello,

Thanks for your work on this. I'll admit I have mixed feelings:

The thing is I don't like the approach you are taking

Thanks for the feedback, though 😅

Instead of a gazillion of docker images with duplicated code, the point is to create a slim image that is used as base to install supervisor/webservers on top of it.

I think that's a little overstated — there is one package to date in this repository, with one flavor per architecture, so I'm not sure I get your point. Where is this gazillion of docker images?

The actual point of having Docker packages in this repository is to provide users with a simple way to install Davis on their stack without having to build anything. Going the "combined docker image" route means that I'll provide two packages instead of one:

  • the same as now (without a reverse-proxy)
  • one with Caddy (that I called davis-standalone)

I won't provide an Nginx flavor, for instance, as it's an implementation detail that is irrelevant if you use the combined Docker image itself (and to Davis as a piece of software in general). Moreover, having many options is a maintenance burden that I have no appetite to support.

On the tags vs. packages difference, I have no strong opinion.

About PR #152

  • Your image is now 149M vs. 162M, so there's indeed some shaved space in there, mainly due to the extension-builder layer. I haven't tested running the image itself, as I know it won't work since a few source files are missing, but that's easily fixable. I'm wary that some dev dependencies might also be used at runtime, though, so this will need extensive testing.
  • There's no point in having composer as a separate layer: the binary is 2.8M and is needed in the final image anyway
  • I don't understand the changes in the CI files, specifically why you removed the digest parts.

General remarks on your rationale

making the build almost instantaneous after the first build.

Yes and no, since we're relying on a base image (8.2-fpm-alpine) and packages that are not pinned, because we want to benefit from upstream changes. So it will be faster if you iterate fast, but not on a longer timeframe unfortunately.

... and you are testing the build at PR time

Sure, if we want to build the images at PR time, which I don't see the benefit of. I was fine with a 20-minute build since it only happens at release time, and I'm not releasing 4 times per day (barely once per month). Given that the base image will likely have been updated between two releases, the cache won't be used (or wanted).

Note that it is an example so you can understand the approach I had in mind of having a PHP-FPM image and using that image as a base to add the webserver on top.

I understand it, yes. This could be a good thing indeed. I'm not sure how to fill in the $BASE_IMAGE when building locally in a docker compose context though, if you care to show me?

Side note: GitHub workflow files are very cumbersome and hard to read, understand, and maintain, so I won't add much more complexity in there if possible.

TL;DR:

→ All in all, I can see the benefits of building the PHP extensions in a separate layer 👌🏼 (but keep in mind it's only a very small 8% space saving)
→ Building the combined image FROM the base image is a nice-to-have; I'd like to see how it works locally with compose


kang-makes commented Apr 1, 2024

Hi!

The thing is I don't like the approach you are taking

Thanks for the feedback, though 😅

I might have been harsh there; I did not mean to be. English is not my first language. Sorry.

Instead of a gazillion of docker images with duplicated code, the point is to create a slim image that is used as base to install supervisor/webservers on top of it.

I think that's a little overstated — there is one package to date in this repository, with one flavor per architecture, so I'm not sure I get your point. Where is this gazillion of docker images?

I meant layers, not images. The more layers you have, the less performant it is on clusters that have too many images. I do not want to talk as if I were running this on the busiest server in the universe, but even a home server with an RPi can start to struggle after a few images that are not well packaged.

I won't provide a Nginx flavor, for instance, as it's an implementation detail that is irrelevant if you want to use the combined Docker image itself (and to Davis as a piece of software in general). Moreover, having many options is a maintenance burden that I won't have appetite to support.

On the tags vs. packages difference, I have no strong opinion.

Fair enough.

making the build almost instantaneous after the first build.

Yes and no, since we're relying on a base image (8.2-fpm-alpine) and packages that are not pinned, because we want to benefit from upstream changes. So it will be faster if you iterate fast, but not on a longer timeframe unfortunately.

Here I made a wrong assumption. You stated "The actual point of having Docker packages in this repository is to provide users with a simple way to install Davis on their stack without having to build anything.", and it is a valid point. The assumption I made was that you might develop this application on Docker itself, so you work and test in the same image you deploy (as I am used to doing).

This is the main reason I also changed the layer order to include the code as late as possible, so you do not have to rebuild everything for each test.

... and you are testing the build at PR time

Sure, if we want to build the images at PR time, which I don't see the benefit of. I was fine with a 20-minute build since it only happens at release time, and I'm not releasing 4 times per day (barely once per month). Given that the base image will likely have been updated between two releases, the cache won't be used (or wanted).

Fair point.

Note that it is an example so you can understand the approach I had in mind of having a PHP-FPM image and using that image as a base to add the webserver on top.

I understand it, yes. This could be a good thing indeed. I'm not sure how to fill in the $BASE_IMAGE when building locally in a docker compose context though, if you care to show me?

Sure:

docker build -t davis:devel .
docker build -t test -f Dockerfile.caddy --build-arg BASE_IMAGE=davis:devel .
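In a compose context, the same build argument could be threaded through build.args (a sketch; the service names are illustrative, Dockerfile.caddy is the file from the commands above):

```yaml
# docker-compose.yml sketch: build the base image first, then feed its tag
# into the flavored Dockerfile through a build argument.
services:
  davis:
    build:
      context: .
    image: davis:devel
  davis-caddy:
    build:
      context: .
      dockerfile: Dockerfile.caddy
      args:
        BASE_IMAGE: davis:devel
```

Building davis first (docker compose build davis) ensures the base tag exists before the flavored image is built.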

Side note: GitHub workflow files are very cumbersome and hard to read, understand, and maintain, so I won't add much more complexity in there if possible.

I can help you with that if you need.

All in all, I can see the benefits of building the PHP extensions in a separate layer 👌🏼 (but keep in mind it's only a very small 8% space saving)

As I said before but did not explain correctly, the point was not only to save a few bytes, but also to reduce the number of layers, and the number of files that sit in layers that are no longer accessible in the final one, so you do not need to squash.


I am used to reviewing PRs directly on the PR itself, like this. I saw that you had some concerns about missing files that are needed in the final image, and I have addressed that. The same for some CI steps.

If you want, you can add all your remaining concerns as comments there, so I can address each issue in a separate thread and we can follow the conversation easily. I would like to address these issues and merge PRs one at a time, so it is not a single PR with 1000 changed lines that is hard to read. After this one, which already includes LDAP and IMAP, we can iterate and add Caddy in a subsequent PR.

Side note: GitHub workflow files are very cumbersome and hard to read, understand, and maintain, so I won't add much more complexity in there if possible.

I have experience with clusters, Docker, and CI, and I can help you if you want. As you can see, I can only find time/energy for FOSS every couple of weeks or so, but I can help. You can ping me on any change you need reviewed, always respecting your view.

Let me know what you think :)

@kang-makes

Any news on this?

Should I still work on this, or just test your approach and see if it fits my needs?


tchapi commented May 3, 2024

Sorry, I have been taken up by a lot of pressing matters, which did not allow me to contribute further.

For now, I'd be open to merging the weight / number-of-layers reduction in the Dockerfile as you proposed in #152; it needs a few changes though (mainly my comments above: composer in the final layer, no changes in the CI files, plus adding back the missing app files in the image). Also, the Dockerfile should stay in the same folder, not move to the root.

Once we have that baked in, we can have a look at the rest, but to be honest I'm leaning more strongly towards not complicating things with a base image plus another image on top.

tchapi added the "stale" label Jun 22, 2024