
POC: cargo watch #610

Open · wants to merge 5 commits into develop
Conversation

@lache-melvin (Collaborator) commented Feb 20, 2024

POC: using cargo watch to restart the development server on changes!

RnD Card

You'll need to install cargo-make, cargo-watch and (maybe: see below) systemfd globally:

cargo install cargo-make cargo-watch systemfd

Note: I'd hoped we could add these to the backend Cargo.toml - I think it's nicer when everything comes set up for you when you clone, rather than another setup step with the global install. However, I can't figure out how to do this, as the backend Cargo.toml is a virtual manifest, which can't have dependencies?

To start the backend in watch mode, run from the backend dir:

cargo make watch

Or, from the repo root:

yarn start-backend:watch

This starts two processes in parallel:

  • watch_build watches for any changes in our Rust source code, and executes cargo build. When the build is finished (and it compiled correctly), it creates a .trigger file.
  • trigger_server_start watches for changes on this .trigger file, and restarts the server. This means the current version of the API remains available until the new version is compiled, and the restart after trigger is fairly quick.
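The two tasks above can be sketched as a cargo-make `Makefile.toml`. This is a hypothetical reconstruction from the description, not copied from the PR: the task names match the text, but the exact flags (e.g. `--no-gitignore`, needed if `.trigger` is gitignored) and the `http::4007` socket are assumptions.

```toml
[tasks.watch_build]
# Rebuild on any source change; the shell command only runs if the build
# succeeds, so .trigger is touched only after a clean compile.
command = "cargo"
args = ["watch", "-x", "build", "-s", "touch .trigger"]

[tasks.trigger_server_start]
# Restart the server only when .trigger changes. systemfd holds the socket
# open across restarts (port hardcoded to 4007, per the note below).
command = "systemfd"
args = ["--no-pid", "-s", "http::4007", "--", "cargo", "watch", "--no-gitignore", "-w", ".trigger", "-x", "run"]

[tasks.watch]
# Run both watchers in parallel.
run_task = { name = ["watch_build", "trigger_server_start"], parallel = true }
```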

About the packages:

  • cargo-make: manages the scripts
  • cargo-watch: watches for file changes, and executes provided commands
  • systemfd: opens sockets and passes them to another process, allowing that process to restart itself without dropping connections
    • cargo-watch stops the existing server before starting the new version. There are a couple of seconds of downtime, and we can get dropped requests in that time.
    • systemfd opens a socket that our Rust server can then bind to, meaning we avoid the request-dropping. It keeps the endpoint alive until the new server is ready... i.e. the request hangs for the restart time, rather than returning an error.
    • In order to support this, there is a code change in the server startup code to check for a tcp listener, and listen on this if available. Otherwise, it resorts to binding to the pre-defined port as we had before.
    • Feedback wanted here! Because we're not restarting the server until it's already compiled, the gains of this change are fairly minimal... I mean it's nice and seamless, but how much do we care if we get an Error: Couldn't connect to server for a second or two?
    • I've also left a cargo make watch_with_downtime (not using the systemfd stuff) so please try this out and let me know your thoughts!
    • If we do go the systemfd way, I also need to figure out how to get the port from the configuration into the makefile? Have hardcoded 4007 for now...

rust-embed = "6.4.2"
mime_guess = "2.0.4"
futures-util = "0.3"
simple-log = { version = "1.6" }
listenfd = "1.0.1"
lache-melvin (Collaborator, Author) commented:
ah my formatter had fun in this file, it's just this line adding listenfd for the TCP listener that actually changed here
