Allow alerting on external endpoints that do not receive a push within a configurable time frame #741

Open
onedr0p opened this issue Apr 19, 2024 · 2 comments
Labels: area/alerting (Related to alerting), feature (New feature or request)

Comments

onedr0p commented Apr 19, 2024

For step 3, will it also include the negative ("Said endpoint must automatically check if an alert has not been triggered"), i.e. raise an alert if no result is received within a specified period?
(aka push-based monitoring)

Originally posted by @r3mi in #722 (comment)

@TwiN added the feature (New feature or request) and area/alerting (Related to alerting) labels on Apr 27, 2024

TwiN (Owner) commented Apr 27, 2024

Thank you for creating the issue.

Now that external endpoints have been implemented (#722, #724), this should probably be the next external-endpoint feature to be implemented, as the two go hand in hand, especially for those using external endpoints to test connectivity. If there's no connectivity, Gatus' API won't be reachable, which means that Gatus wouldn't be able to trigger an alert without this feature.

The feature in question should allow the user to configure a duration within which an update is expected to be received.

Should the duration elapse with no new status update, a status should be created to indicate a failure to receive an update within the expected time frame.
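
The detection itself could be as simple as periodically comparing the time elapsed since the last received result against the configured duration. A minimal sketch, assuming the last result's timestamp is tracked per external endpoint (the helper name and parameters below are illustrative, not a final design; time.Time and time.Duration come from the standard library):

// Hypothetical helper: returns true when an external endpoint has not received
// an update within its configured maximum duration (a zero duration disables the check).
func missedUpdate(lastResultTimestamp time.Time, maximumDuration time.Duration, now time.Time) bool {
	if maximumDuration == 0 {
		return false
	}
	return now.Sub(lastResultTimestamp) > maximumDuration
}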

That newly created failure result should in turn cause

func HandleAlerting(endpoint *core.Endpoint, result *core.Result, alertingConfig *alerting.Config, debug bool) {
	if alertingConfig == nil {
		return
	}
	if result.Success {
		handleAlertsToResolve(endpoint, result, alertingConfig, debug)
	} else {
		handleAlertsToTrigger(endpoint, result, alertingConfig, debug)
	}
}
to be called, which would then lead to handleAlertsToTrigger being called (because the new result indicating the failure to receive an update would have its Success field set to false), incrementing NumberOfFailuresInARow
func handleAlertsToTrigger(endpoint *core.Endpoint, result *core.Result, alertingConfig *alerting.Config, debug bool) {
	endpoint.NumberOfSuccessesInARow = 0
	endpoint.NumberOfFailuresInARow++
	for _, endpointAlert := range endpoint.Alerts {
		// ...

and triggering whichever alerts should be triggered.
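
For illustration, the synthetic result could be wired into that existing path roughly like this (the function name is hypothetical, and the core.Result fields used here, Errors and Timestamp, are assumptions based on the snippets above rather than a final design):

// Hypothetical: called once the configured duration has elapsed without a push
// for the given external endpoint.
func triggerMissedUpdateAlert(endpoint *core.Endpoint, alertingConfig *alerting.Config, debug bool) {
	failedResult := &core.Result{
		Success:   false,
		Errors:    []string{"no update received within the expected time frame"},
		Timestamp: time.Now(),
	}
	// Success being false routes the result to handleAlertsToTrigger, which increments
	// NumberOfFailuresInARow and triggers any alerts whose failure threshold is reached.
	HandleAlerting(endpoint, failedResult, alertingConfig, debug)
}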

The only proper name I can think of for this feature is "dead man's switch", but as silly as it may sound, I don't like how that'd look on the configuration:

external-endpoints:
  - name: ...
    dead-man-switch:
      blackout-duration-until-automatic-failure: 1h
    alerts: 
      - type: slack
        send-on-resolved: true

Another consideration is the interaction between this feature and maintenance periods. While a maintenance period should prevent alerts from being triggered, should the failure status be pushed anyway? Perhaps this should be an additional parameter on the maintenance configuration (e.g. maintenance.silence-dead-man-switch)?
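
For illustration only, such a parameter could sit alongside the other maintenance keys; the silence-dead-man-switch key below is purely hypothetical, and start/duration are shown only to place it in context:

maintenance:
  start: "23:00"
  duration: 1h
  silence-dead-man-switch: true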

Some food for thought.

onedr0p (Author) commented Apr 27, 2024

The only proper name I can think of for this feature is "dead man's switch", but as silly as it may sound, I don't like how that'd look on the configuration:

I've seen other services call this a heartbeat instead of a dead man's switch, and they also have a configurable grace period.

external-endpoints:
  - name: ...
    heartbeat:
      interval: 5m
      grace-period: 5m
    alerts: 
      - type: slack
        send-on-resolved: true
