"full buffer" messages on console #22

Closed
mjmac opened this issue Apr 2, 2019 · 7 comments · Fixed by pion/transport#35

Comments

@mjmac

mjmac commented Apr 2, 2019

Your environment.

  • Version: v1.1.2/b2f6a7fc
  • Other Information:

I added some extra logging to help figure out why this message was appearing:

--- a/vendor/github.com/pions/transport/packetio/buffer.go
+++ b/vendor/github.com/pions/transport/packetio/buffer.go
@@ -1,9 +1,10 @@
 package packetio

 import (
-       "errors"
        "io"
        "sync"
+
+       "github.com/pkg/errors"
 )

 // ErrFull is returned when the buffer has hit the configured limits.
@@ -57,7 +58,7 @@ func (b *Buffer) Write(packet []byte) (n int, err error) {
        // Check if there is available capacity
        if b.limitSize != 0 && b.size+len(packet) > b.limitSize {
                b.mutex.Unlock()
-               return 0, ErrFull
+               return 0, errors.Wrap(ErrFull, "Buffer capacity reached in Write()")
        }
--- a/vendor/github.com/pions/srtp/session.go
+++ b/vendor/github.com/pions/srtp/session.go
@@ -5,6 +5,8 @@ import (
        "io"
        "net"
        "sync"
+
+       "github.com/pkg/errors"
 )

 type streamSession interface {
@@ -92,6 +94,10 @@ func (s *session) close() error {
        return nil
 }

+type stackTracer interface {
+       StackTrace() errors.StackTrace
+}
+
 func (s *session) start(localMasterKey, localMasterSalt, remoteMasterKey, remoteMasterSalt []byte, profile ProtectionProfile, child streamSession) error {
        var err error
        s.localContext, err = CreateContext(localMasterKey, localMasterSalt, profile)
@@ -127,6 +133,11 @@ func (s *session) start(localMasterKey, localMasterSalt, remoteMasterKey, remote

                        if err = child.decrypt(b[:i]); err != nil {
                                fmt.Println(err)
+                               if err, ok := err.(stackTracer); ok {
+                                       for _, f := range err.StackTrace() {
+                                               fmt.Printf("%+s:%d\n", f, f)
+                                       }
+                               }
                        }
                }
        }()

This gave me the following stack trace on the console (one of zillions, once they started appearing):

Buffer capacity reached in Write(): full buffer
savant-go/vendor/github.com/pions/transport/packetio.(*Buffer).Write
	/Users/mike.macdonald/go/src/savant-go/vendor/github.com/pions/transport/packetio/buffer.go:61
savant-go/vendor/github.com/pions/srtp.(*ReadStreamSRTCP).write
	/Users/mike.macdonald/go/src/savant-go/vendor/github.com/pions/srtp/stream_srtcp.go:28
savant-go/vendor/github.com/pions/srtp.(*SessionSRTCP).decrypt
	/Users/mike.macdonald/go/src/savant-go/vendor/github.com/pions/srtp/session_srtcp.go:129
savant-go/vendor/github.com/pions/srtp.(*session).start.func1
	/Users/mike.macdonald/go/src/savant-go/vendor/github.com/pions/srtp/session.go:134
runtime.goexit
	/usr/local/Cellar/go/1.11.5/libexec/src/runtime/asm_amd64.s:1333
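
(For reference, the same stack-trace extraction can be done without patching the vendored packages; this is just a self-contained sketch of the github.com/pkg/errors trick used in the diff above, with made-up function names:)

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// stackTracer matches errors produced by github.com/pkg/errors that carry
// a recorded call stack.
type stackTracer interface {
	StackTrace() errors.StackTrace
}

// failingWrite stands in for the vendored Buffer.Write call site.
func failingWrite() error {
	return errors.Wrap(errors.New("full buffer"), "Buffer capacity reached in Write()")
}

func main() {
	err := failingWrite()
	fmt.Println(err)
	if st, ok := err.(stackTracer); ok {
		for _, f := range st.StackTrace() {
			// %+s prints the function and file, %d prints the line number.
			fmt.Printf("%+s:%d\n", f, f)
		}
	}
}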

What did you do?

I was testing multiple video streams. I had not noticed this issue before, but I added a new high-bandwidth stream source (2-4 Mbps), which seems to have been enough to put it over the edge.

Once I started seeing the errors in the console log, I also observed that the video streams became choppy, dropping from 30fps down to single-digits and in some cases freezing altogether until restarted.

What did you expect?

That not to happen.

What happened?

It happened.

@mjmac
Author

mjmac commented Apr 2, 2019

I'm not familiar with this area of the project... It seems like it has something to do with RTCP reports, based on the stack trace. Could it be that we just need to increase the buffer based on throughput?
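
(For rough intuition, and with numbers I'm making up rather than taking from the library: the time a fixed-size receive buffer survives without being drained is just its size divided by the incoming byte rate, so a 2-4 Mbps stream shrinks that window considerably:)

package main

import "fmt"

func main() {
	const bufferBytes = 100 * 1024     // hypothetical receive-buffer limit
	const bitsPerSecond = 4000000.0    // the 4 Mbps stream mentioned above

	bytesPerSecond := bitsPerSecond / 8
	fmt.Printf("an undrained %d-byte buffer fills in ~%.2f s at 4 Mbps\n",
		bufferBytes, bufferBytes/bytesPerSecond)
}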

@mjmac
Author

mjmac commented Apr 2, 2019

After some discussion in the channel (thanks @Sean-Der!), I realize that the full buffer message may correlate with the stream performance issues I was observing, but it almost certainly doesn't imply causality. In other words, this isn't a bug.

I'll leave it open to track replacing those fmt.Print* messages with proper logging, though...

@mjmac
Author

mjmac commented Apr 2, 2019

If it is really optional for a library user to handle those RTCP messages, then we probably shouldn't log at Warning. Maybe log.Notice("RTCP receive buffer full. This is not an error if your code doesn't care about RTCP messages."), guarded by a boolean that is flipped after the first log message to avoid spamming the console?
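
(A minimal sketch of that "log once" idea, using sync.Once and the standard library logger rather than pion/logging, whose exact API I'm not assuming here:)

package main

import (
	"log"
	"sync"
)

// onceLogger prints a given notice at most once, however many times
// Notice is called, which is the "flip a boolean after the first
// message" behaviour described above.
type onceLogger struct {
	once sync.Once
}

func (l *onceLogger) Notice(msg string) {
	l.once.Do(func() { log.Println(msg) })
}

func main() {
	var l onceLogger
	for i := 0; i < 1000; i++ {
		// Only the first call produces output.
		l.Notice("RTCP receive buffer full. This is not an error if your code doesn't care about RTCP messages.")
	}
}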

@Sean-Der
Member

Sean-Der commented Apr 2, 2019

100% agree with that, should be informational only!

If you drop a PR changing that, I'm happy to approve right away!

@wdouglass
Member

FWIW, I'm having this same issue, including the slowdown (which, I agree with @mjmac, does not imply causation).

@Sean-Der
Member

Sean-Der commented Apr 9, 2019

That sucks :/

@wdouglass do you know who/where is printing this message? Should we extend pion/logging to print file+line?

Can you also try bumping up https://github.com/pion/srtp/blob/master/stream_srtp.go#L12 and https://github.com/pion/srtp/blob/master/stream_srtcp.go#L12 to see if that helps?
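
(Not pion/logging itself, but as a sketch of the file+line idea: the standard library logger can already annotate messages with the caller's file and line via the Lshortfile flag:)

package main

import (
	"log"
	"os"
)

func main() {
	// Lshortfile prefixes each message with the caller's file name and
	// line number, e.g. "srtp 2019/04/09 12:00:00 main.go:12: full buffer".
	logger := log.New(os.Stderr, "srtp ", log.LstdFlags|log.Lshortfile)
	logger.Println("full buffer")
}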

@wdouglass
Member

I actually fixed the issue locally a few hours ago (I was holding a lock in my RTCP handler for too long).
I'm going to go ahead and agree with your comment above -- a more descriptive message than 'buffer full' would be super useful!
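
(Roughly what that fix looks like, as a generic sketch rather than the actual handler: keep the critical section short and move any slow work off the path that feeds the receive buffer.)

package main

import (
	"sync"
	"time"
)

type rtcpStats struct {
	mu      sync.Mutex
	packets int
}

// handleRTCP is called from the loop that drains the RTCP read stream.
// Holding mu across slow work here stalls that loop, the receive buffer
// backs up, and Write starts returning the "full buffer" error above.
func (s *rtcpStats) handleRTCP(pkt []byte) {
	buf := make([]byte, len(pkt))
	copy(buf, pkt)

	s.mu.Lock()
	s.packets++ // keep the critical section short
	s.mu.Unlock()

	go slowWork(buf) // expensive processing happens off the hot path
}

func slowWork(pkt []byte) {
	time.Sleep(10 * time.Millisecond) // stand-in for expensive processing
	_ = pkt
}

func main() {
	var s rtcpStats
	s.handleRTCP([]byte{0x80, 0xc8})
	time.Sleep(20 * time.Millisecond) // give slowWork time to run in this demo
}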
