Add various servers and rework caching solution #25

Open
matthewh86 wants to merge 45 commits into master

Conversation

@matthewh86 (Contributor) commented Aug 25, 2021

No description provided.

matthewh86 and others added 30 commits August 21, 2018 20:11
Also simplified FULL_ADMINS
The ones specified required the left4downtown plugin
@sirsquidness (Collaborator) left a comment:

Thank you for your extensive work here! Lots of overdue cleanups and upgrades :)

On the caching system, I want to refine the design a bit more.

It seems to me that the use case you're solving is this: you build your images on the same server that you run them on, and you don't want to re-download the 15GB or so of CSGO every time the base image is updated. Does that sound correct?

I use these docker images with Kubernetes and local private dockerhubs (at least I did back before the pandemic when live events still existed). So I build, push to dockerhub over network, and 1 or more kube servers pull the image. So image layering efficiency is important to me - both in terms of total size, and how long it takes to push an update. (this is also why I use timestamp tags). 15GB images take a lot of time to pull over the network and extract on disk, and when Valve drops a patch in the middle of a tournament, time matters to me.

In the solution in this PR, every time there's a game update, the `ADD cache/ .` layer in the Dockerfile would be totally rebuilt. In the case of CSGO, this means a brand new 15GB layer every time.

Here's some pseudocode that I think would solve both your use case and mine at the same time:


# new script: update_cache.sh or something
steamID=$(cat csgo/steamid)
if [ -n "$steamID" ]; then
    # This could use a docker container, or could use steamcmd on the local system, doesn't matter
    steamcmd +force_install_dir csgo/.cache +login anonymous +app_update "$steamID" +quit
fi

# inside docker_build.sh
# Make sure .cache exists, even if empty, so that the Dockerfile can use `COPY` on it. The Dockerfile
# can't check whether the directory exists - the build simply fails if it doesn't - so we have to make
# sure it exists even if the cache isn't populated.
mkdir -p "$game/.cache"

# Dockerfile for csgo
# Add the cache - this could be empty, a slightly out-of-date copy of the game, or a fully up-to-date
# copy. Updating the cache invalidates the layer cache of this step, so updating the cache causes the
# entire image to be effectively rebuilt.
COPY --chown=steam .cache/ /steam/csgo
# Run the steamcmd update, so even if the cache was out of date or empty, there will be an up-to-date
# copy of the game. Docker caches this layer as-is, which guarantees a useful cached layer exists for
# updates to be based on.
RUN steamcmd +force_install_dir /steam/csgo +login anonymous +app_update $csgoappid +quit

# So if there's a game update, and you don't want to distribute a brand new 15GB layer and want to do
# an incremental update, set the build arg CACHE_DATE to `$(date)`: changing an ARG value busts the
# cache for every RUN after it, so this last layer will contain just the diff between the existing
# docker image and the update. If there are no updates to make, this layer is effectively a no-op.
ARG CACHE_DATE
RUN steamcmd +force_install_dir /steam/csgo +login anonymous +app_update $csgoappid +quit

So as a user, I'd do something like:

./update_cache.sh csgo
./docker_build.sh csgo

This slightly different approach has benefits for both of our use cases!

For my own use case, it allows me to make the most efficient use of docker image layers - small CSGO update is really fast to build and distribute. I always have a steam cache running, and rarely update the base image, so for my own workflows this means I can opt out of using the cache and just use the networked steam cache instead.

In general, building the cache before building the docker image means...

  • you get to pick when you update the cache
  • which means successive docker builds become idempotent unless you update the cache - running docker_build.sh csgo twice in a row will result in building the image the first time, and using all cached layers the second time
  • and also means there's no post-build step on every build to copy the game back out of the built image to the cache (big time saver in the case of CSGO!) (which also helps keep successive rebuilds idempotent)
  • Small updates to the image can rely on the CACHE_DATE build arg instead of the local cache, which can be a lot faster (see the sketch below).
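
A minimal sketch of that incremental path, assuming the image is tagged `csgo` and the Dockerfile keeps the `ARG CACHE_DATE` step from the pseudocode above:

```
# Busts only the final RUN layer; the new layer contains just the game diff
docker build --build-arg CACHE_DATE="$(date)" -t csgo .
```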

Would love to hear your thoughts! Does it solve your use case? Does it align with the vision you had for the PR?

This PR is much appreciated, and I look forward to merging it once we've converged on a caching flow that works for both of us.

## Troubleshooting

### I can't see the LAN Server!
* Try disabling `Automatic Metric` and entering `1` for your LAN/WiFi adapter
Collaborator:

I assume you mean in the Windows network adapter properties on the client device? Worth being a bit more specific - most people won't know what this is or where to find it. Alternative advice: unplug/disable other network connections (Ethernet, VPNs, etc.).
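
For example (a sketch only - "Wi-Fi" is a hypothetical adapter name, and the GUI path assumes a recent Windows client):

```
:: Adapter Properties > Internet Protocol Version 4 > Advanced > untick "Automatic metric",
:: or equivalently, from an elevated prompt:
netsh interface ipv4 set interface "Wi-Fi" metric=1
```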

ADD *.zip /tm/
RUN ls /tm/
RUN unzip /tm/*.zip
RUN curl -o TrackmaniaServer_2011-02-21.zip http://files2.trackmaniaforever.com/TrackmaniaServer_2011-02-21.zip && \
Collaborator:

`&& rm Trackmania*.zip` to save some space in the layer
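
For instance, a sketch of the download step with the cleanup folded into the same layer (same URL and paths as the diff above):

```
RUN curl -o /tm/TrackmaniaServer_2011-02-21.zip http://files2.trackmaniaforever.com/TrackmaniaServer_2011-02-21.zip && \
    unzip /tm/TrackmaniaServer_2011-02-21.zip -d /tm/ && \
    rm /tm/Trackmania*.zip
```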


sed -i "s/%%MASTER_SERVER_LOGIN%%/$MASTER_SERVER_LOGIN/" /tm/GameData/Config/dedicated_cfg.txt
sed -i "s/%%MASTER_SERVER_PASSWORD%%/$MASTER_SERVER_PASSWORD/" /tm/GameData/Config/dedicated_cfg.txt
sed -i "s/%%MASTER_SERVER_KEY%%/$MASTER_SERVER_KEY/" /tm/GameData/Config/dedicated_cfg.txt
Collaborator:

Do you know if a blank value for AdminPassword etc. disables access, or if it means unauthenticated access is allowed? This isn't a regression either way - it would be doing exactly what the existing code does - just something I noticed while reading this.
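
If a blank value does turn out to mean unauthenticated access, a minimal guard sketch (hypothetical, placed before the sed lines above) could fail fast instead:

```
# Hypothetical guard: refuse to template an empty password into dedicated_cfg.txt
if [ -z "$MASTER_SERVER_PASSWORD" ]; then
    echo "MASTER_SERVER_PASSWORD must be set" >&2
    exit 1
fi
```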

CMD ["./start.sh"]
Collaborator:

Looks like most of the Dockerfiles you've touched have been standardised on ENTRYPOINT rather than CMD... seems reasonable to make them all ENTRYPOINT.
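
i.e. the line above would become:

```
ENTRYPOINT ["./start.sh"]
```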

Comment on lines +15 to +17
RUN chown -R steam /steam/${GAME}/ \
&& mv .cache/* . \
&& rm -fr .cache
Collaborator:

This workflow feels like it's going to make the images twice as big as they need to be... there will be a layer with the cached content at /steam/css/.cache and a layer with it at /steam/css. Have you experienced that in your testing?
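
One way to check, assuming the built image is tagged `css`: `docker history` prints per-layer sizes, so a duplicated game copy would show up as two multi-GB layers:

```
docker history css --format '{{.Size}}\t{{.CreatedBy}}'
```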

Contributor (Author):

Having a multi-line RUN command will only create one layer. This is a bit of a moot point though, as the caching "solution" I've added turns out to be quite slow locally.

I'm thinking that mounting a volume and running steamcmd update on the ENTRYPOINT rather than in the build will reduce image size, build time, and data usage. Especially with subsequent builds that change the underlying image layers.

Collaborator:

If you're meaning that you'd keep the CSGO or whatever game files outside of the container and mount them in using a volume, that's not gonna fly here. The game files must be inside the image, and must be installed/updated during image build - anything else isn't really suitable for what this repo is trying to achieve

Perhaps a solution that would work well for you: use a regular LANCache-type thing so that the game install goes super quick. No need for complicated or hacky solutions here - installs will still be super fast. You could even run the cache on your local docker installation if you wanted to avoid the cost of going over the gigabit network. The steamcmd updates support using a steamcache with no problems.

RUN mkdir -p /steam/${GAME}/.cache/ \
&& touch /steam/${GAME}/.cache/empty
WORKDIR /steam/${GAME}/
COPY .cache .
Collaborator:

Making this `COPY --chown=steam .cache/ .` (or maybe `.cache/*`?) could negate the need for the next RUN block entirely.
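
A sketch of the consolidated steps, assuming the same `${GAME}` build arg as in the diff above:

```
WORKDIR /steam/${GAME}/
# Copy the (possibly empty) cache straight into place, already owned by steam,
# so no follow-up chown/mv RUN layer is needed
COPY --chown=steam .cache/ .
```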

docker_build.sh Outdated
else
echo "No existing master '.cache' for ${i} found."
fi
docker build . -t ${i} || exit 1
Collaborator:

The old build scripts tagged with a timestamp, as well as explicitly with :latest. I personally find the timestamps useful for my kubernetes-based workflows so I'd like to keep that functionality. But this has implications for dangling image cleanup. Do you have strong feelings either way on the topic?
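
A minimal sketch of keeping both tags, reusing the `${i}` loop variable from the script above:

```
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
docker build . -t "${i}:latest" -t "${i}:${TIMESTAMP}" || exit 1
```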

To catch updates since the original image was built, run:

```
docker build --build-arg CACHE_DATE="$(date)" .
```
Collaborator:

The CACHE_DATE arg was removed from the Dockerfile, so this bit of the README is out of date... unless we put the CACHE_DATE stuff back.

@matthewh86 matthewh86 closed this Aug 29, 2022
@matthewh86 matthewh86 reopened this Aug 29, 2022