Improve Kestrel connection metrics with error reasons #55565
Conversation
```diff
@@ -1727,7 +1727,7 @@ public async Task AbortConnectionAfterTooManyEnhanceYourCalms()
     await WaitForConnectionErrorAsync<Http2ConnectionErrorException>(
         ignoreNonGoAwayFrames: true,
         expectedLastStreamId: int.MaxValue,
-        expectedErrorCode: Http2ErrorCode.INTERNAL_ERROR,
+        expectedErrorCode: Http2ErrorCode.ENHANCE_YOUR_CALM,
```
@amcasey I added a `HttpConnection.Abort` overload that has an `Http2ErrorCode` argument. When there are excessive resets I made the passed-in value `ENHANCE_YOUR_CALM` instead of `INTERNAL_ERROR`. That seems a more appropriate error. Did you choose `INTERNAL_ERROR` for a reason here, or is it returned because that is what abort already sent?
This was a hotfix, so I suspect I was making the smallest change possible, but I no longer recall.
What was the motivation to add the data initially? I can hypothesize a reason someone might find it useful (diagnosing why connections didn't live as long as they were expected to), but I don't know how often someone needs to diagnose that issue in practice, or what alternatives they would have available if this metric attribute didn't exist.
Any attempt at classification gets very subjective, but a few suggestions at least:
(Resolved review thread on src/Servers/Kestrel/Core/src/Internal/Http/Http1OutputProducer.cs)
I think it's valuable to know the end reason even if it's not an error and even if we don't have a specific scenario in mind. The change looks great from a semconv perspective! LMK if you need any help adding new attributes to otel.
This seems like data that should be going into logs, not metrics. Metric dimensions should be kept small, as adding additional properties and values adds incremental storage. We could reduce this data down to one property - was the connection closed cleanly or due to an error - and provide logs to give more details as to why. Metrics are supposed to be about aggregated statistics, not detailed analysis of what happened. Anyone who is going to be doing analysis can use the metric to determine if the trend is changing, and then go to a log query to understand why. Once exemplars are better supported in .NET/OTel, they will provide the correlation between metrics and traces/logs.
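The workflow proposed in this comment - a single coarse error dimension on the metric, with logs carrying the detail - can be sketched as follows. This is a hypothetical illustration with made-up data and field names, not a real exporter or logging API: aggregate the metric to spot a trend, then pivot to a log query for the specific reason.

```python
# Hypothetical exported points for kestrel.connection.duration (seconds),
# carrying only one coarse dimension: did the connection end in an error?
metric_points = [
    {"duration": 12.0, "tags": {"error": "false"}},
    {"duration": 0.3, "tags": {"error": "true"}},
    {"duration": 0.2, "tags": {"error": "true"}},
]

# Structured log records carry the detailed end reason per connection.
log_records = [
    {"connection_id": "c1", "end_reason": "ServerTimeout"},
    {"connection_id": "c2", "end_reason": "ServerTimeout"},
]

# Step 1: use the metric to see whether the error trend is changing.
error_count = sum(1 for p in metric_points if p["tags"]["error"] == "true")

# Step 2: pivot to a log query to understand why those connections failed.
reasons = {r["end_reason"] for r in log_records}

print(error_count, reasons)
```

In a real deployment the two steps would be a metrics query (e.g. in a dashboard) and a log query joined on time range; exemplars would eventually link the two directly.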
There is prior art for showing more error info in
Note that while the
That is similar to how there are many HTTP status codes, but most of them you'll never see.
```diff
@@ -64,6 +64,7 @@ public HttpConnection(BaseHttpConnectionContext context)
     // _http2Connection must be initialized before yielding control to the transport thread,
     // to prevent a race condition where _http2Connection.Abort() is called just as
     // _http2Connection is about to be initialized.
+    _context.ConnectionFeatures.Set<IProtocolErrorCodeFeature>(new ProtocolErrorCodeFeature());
```
I think you may already have explained this, but why isn't it needed for HTTP/3 as well?
```diff
         break;
     case TimeoutReason.RequestBodyDrain:
     case TimeoutReason.TimeoutFeature:
-        Abort(new ConnectionAbortedException(CoreStrings.ConnectionTimedOutByServer));
+        Abort(new ConnectionAbortedException(CoreStrings.ConnectionTimedOutByServer), ConnectionEndReason.ServerTimeout);
```
Do we want a separate reason for `RequestBodyDrain`? That sounds like it might be about OS timing that we don't control?
```diff
     CancelRequestAbortedToken();
 }

 void IHttpOutputAborter.OnInputOrOutputCompleted()
 {
-    _http1Output.Abort(new ConnectionAbortedException(CoreStrings.ConnectionAbortedByClient));
+    _http1Output.Abort(new ConnectionAbortedException(CoreStrings.ConnectionAbortedByClient), ConnectionEndReason.TransportCompleted);
```
I'm a little nervous we're going to get a bug saying there was a spike in `TransportCompleted` and we'll need to know which of these it was.
```diff
@@ -72,7 +72,7 @@ protected override Task OnConsumeAsync()
     _context.ReportApplicationError(connectionAbortedException);

     // Have to abort the connection because we can't finish draining the request
-    _context.StopProcessingNextRequest();
+    _context.StopProcessingNextRequest(ConnectionEndReason.AbortedByApplication);
```
It feels like we might want to distinguish between "the application requested an abort" and "the application failed and we had to abort".
```diff
 }
 catch (Exception ex)
 {
     Log.LogWarning(0, ex, CoreStrings.RequestProcessingEndError);
     error = ex;
     errorCode = Http2ErrorCode.INTERNAL_ERROR;
+    reason = ConnectionEndReason.UnexpectedError;
```
Maybe "unknown error"? It seems weird to describe it as "unexpected".
```diff
     _log.Http2ConnectionError(_connectionId, connectionError);
     await WriteGoAwayAsync(int.MaxValue, Http2ErrorCode.FLOW_CONTROL_ERROR);

     // Prevent Abort() from writing an INTERNAL_ERROR GOAWAY frame after our FLOW_CONTROL_ERROR.
     Complete();
     // Stop processing any more requests and immediately close the connection.
-    _http2Connection.Abort(new ConnectionAbortedException(CoreStrings.Http2ErrorWindowUpdateSizeInvalid, connectionError));
+    _http2Connection.Abort(new ConnectionAbortedException(CoreStrings.Http2ErrorWindowUpdateSizeInvalid, connectionError), Http2ErrorCode.FLOW_CONTROL_ERROR, ConnectionEndReason.WindowUpdateSizeInvalid);
```
Will anything bad happen if the `ConnectionEndReason` gets out of sync with `connectionError`?
```diff
@@ -530,8 +544,14 @@ private void UpdateStreamTimeouts(long timestamp)
     await outboundControlStreamTask;
 }

+// Use graceful close reason if it has been set.
+if (reason == ConnectionEndReason.Unknown && _gracefulCloseReason != ConnectionEndReason.Unknown)
```
Nit: I don't think `&& _gracefulCloseReason != ConnectionEndReason.Unknown` actually accomplishes anything.
With the notable disclaimer that I haven't actually monitored the health of a real service... I think I'd want to have it for all connections. Sure, the majority will be graceful shutdowns (hopefully), but it's (I assume) easy enough to filter out a few uninteresting values, and which values are uninteresting seems (as mentioned above) very subjective.

Another reason to include things that don't seem like errors is for making attack signatures. Something that's "normal" for a single connection may be problematic when it happens on many connections simultaneously. Or it may not be problematic, but still indicate that a change that signals a problem will soon occur.

Personally, I think I'd have just `end_reason` and drop `error_type` altogether (with the caveat that we might want a special way to represent unhandled exceptions). If we go the other way, we have all sorts of strange cases to consider, like whether cutting off a slow-reading client is an error. Arguably, that's the system functioning as expected and desired, but it's still interesting.
@samsp-msft I'm not sure I'm following this line of reasoning. Given that logs are structured and queryable, why would anything not be logs-only? When would you use metrics instead?
Looks like this PR hasn't been active for some time and the codebase could have been changed in the meantime. |
Fixes #53358
This PR improves the `kestrel.connection.duration` metric by adding information about why the connection ended.

Tags added or changed:

- `http.connection.protocol_code` - Standards-based, publicly communicated code for the connection ending. Often sent to the client. The value is either the HTTP/2 error code or HTTP/3 error code that the connection ended with. Doesn't apply to HTTP/1.1.
- `kestrel.connection.end_reason` - A more detailed, internal reason for why a connection ended. Always present.
- `error.type` - Present if the end reason is an error, e.g. connection killed because of timeout, excessive stream resets, transport ended while there are active HTTP requests, invalid frame, etc. `error.type` could be the same as the end reason. The value could also be an exception type name if an unhandled exception is thrown from the connection middleware pipeline. The first error recorded wins. Only present if there is an error.

We need to decide what the new tag is. Is it the end reason or the error reason? For example, a connection can be ended by the client sending a GOAWAY frame. Do we want to track that? It's not an error to close a connection like that, so in that case we are tracking all end reasons. On the other hand, we might want to track only unexpected and unhealthy reasons that end the connection. The benefit of tracking only errors is that it is very easy to get just the unhealthy connections by filtering metrics to ones that have this tag. However, I think we can achieve this by also putting error reasons into the `error.type` tag.

tl;dr: `kestrel.connection.end_reason` is always present and provides the reason; `error.type` is present if the end reason is considered an error and has the same value as `kestrel.connection.end_reason`.

Questions for folks:

- Should we drop `kestrel.connection.end_reason` and just have `error.type`?

Tasks:
cc @noahfalk @tarekgh @lmolkova @samsp-msft
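The tag semantics proposed above (first recorded reason wins; `error.type` is emitted only for errors and mirrors `kestrel.connection.end_reason`) can be sketched as a small model. This is a hypothetical Python illustration of the described behavior - the class and method names are made up and are not Kestrel's actual types.

```python
from typing import Optional


class ConnectionMetricsContext:
    """Hypothetical model of the proposed tag semantics (not Kestrel's real types)."""

    def __init__(self) -> None:
        self._end_reason: Optional[str] = None
        self._is_error = False

    def try_set_reason(self, reason: str, is_error: bool) -> None:
        # The first recorded reason wins; later calls are ignored.
        if self._end_reason is None:
            self._end_reason = reason
            self._is_error = is_error

    def duration_tags(self) -> dict:
        # kestrel.connection.end_reason is always present.
        tags = {"kestrel.connection.end_reason": self._end_reason or "unknown"}
        # error.type is only present for errors, and mirrors the end reason.
        if self._is_error:
            tags["error.type"] = self._end_reason
        return tags


ctx = ConnectionMetricsContext()
ctx.try_set_reason("server_timeout", is_error=True)
ctx.try_set_reason("transport_completed", is_error=False)  # ignored: first wins
print(ctx.duration_tags())
```

With this shape, filtering a metrics backend to series that have the `error.type` tag yields exactly the unhealthy connections, while `kestrel.connection.end_reason` still covers graceful closes.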