
iOS 15 now supports PiP, did you try it with this project? #11

Open
fukemy opened this issue Nov 26, 2021 · 18 comments

Comments

@fukemy commented Nov 26, 2021

Hi, I read the doc about PiP for video calls, but I couldn't get it working on my own; it feels too hard for me. So I want to ask: would you like to add PiP mode for video calls to this project?
https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=_1

@Meonardo (Owner) commented

https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=_1
This is good news; I will try to do it this weekend.

@fukemy (Author) commented Nov 26, 2021

Thanks! I'm stuck on converting the buffer for AVSampleBufferDisplayLayer, waiting for your help now.
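For reference, this conversion usually means wrapping the CVPixelBuffer in a CMSampleBuffer and enqueueing it on the layer. A minimal sketch, assuming no meaningful timestamps are available so the frame is marked to display immediately (the helper name is hypothetical, not code from this repo):

```swift
import AVFoundation
import CoreMedia

// Hypothetical helper: wrap a CVPixelBuffer in a CMSampleBuffer and
// enqueue it on an AVSampleBufferDisplayLayer for rendering.
func enqueue(pixelBuffer: CVPixelBuffer, on layer: AVSampleBufferDisplayLayer) {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return }

    // No timing information here; mark the frame to display immediately.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .invalid,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: format,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    guard let sample = sampleBuffer else { return }

    if let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: true) as? [CFMutableDictionary],
       let first = attachments.first {
        CFDictionarySetValue(first,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }
    layer.enqueue(sample)
}
```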

@fukemy (Author) commented Nov 26, 2021

One bad point: you need to register your app with Apple to get the multitasking entitlement used for PiP during calls, which is unfortunate. You can see the docs here:

https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_avfoundation_multitasking-camera-access
https://developer.apple.com/contact/request/multitasking-camera-access/
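For context, what the request unlocks is a single entitlement key (described in the first doc linked above). Once Apple approves, the app's .entitlements file would presumably contain:

```xml
<key>com.apple.developer.avfoundation.multitasking-camera-access</key>
<true/>
```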

@Meonardo (Owner) commented

Yes, I saw those requirements. I made a request earlier and am waiting for Apple to approve it...

@fukemy (Author) commented Nov 26, 2021

OK, hope you get this PiP feature into the demo <3

@Meonardo (Owner) commented

@fukemy Hi, I currently have an urgent task to finish, so I will try my best to work on this after the task is done.

@fukemy (Author) commented Nov 29, 2021

Don't worry, we have time; I can wait. If you have any ideas, you can post them here.

@Derad6709 commented

Any updates?

@fukemy (Author) commented Mar 14, 2022

Hi, I have a new idea based on PiPKit.
It uses a captured view to render the PiP content; can you check whether this could work for us?
I tried it but it returns a black screen, though I think it might still work.

@Meonardo (Owner) commented

Sorry for the long silence @fukemy, can you try the latest main branch?

@fukemy (Author) commented Mar 16, 2022

Hi, welcome back @Meonardo. Could you tell me what's new in the latest main branch?
The version I'm currently using works normally, so I'll check it against the Janus server.

@Meonardo (Owner) commented

I added PiP mode support, but it only works when the video room is configured with the H.264 codec. I'm not sure what the problem is; still working on it...

@fukemy (Author) commented Mar 16, 2022

OK, I can switch the codec to VP8 on the iOS side to test; I'll try it when I have free time, thanks.
If you want to check VP8 with me, you can see the code below:

import UIKit
import ReplayKit
import Accelerate
import CoreMedia
import WebRTC

protocol ScreenSampleCapturerDelegate: AnyObject {
    func didCaptureVideo(sampleBuffer: CMSampleBuffer)
}

enum VideoRotation: Int {
    case _0 = 0
    case _90 = 90
    case _180 = 180
    case _270 = 270
}

open class ScreenSampleCapturer: RTCVideoCapturer, ScreenSampleCapturerDelegate {
    // Target down-scaled size; the height is recomputed in init to keep
    // the screen's aspect ratio.
    var kDownScaledFrameWidth = 360
    var kDownScaledFrameHeight = 960

    override init(delegate: RTCVideoCapturerDelegate) {
        super.init(delegate: delegate)
        let width = Int(UIScreen.main.bounds.width)
        let height = Int(UIScreen.main.bounds.height)
        kDownScaledFrameHeight = height * kDownScaledFrameWidth / width
        print("kDownScaledFrameWidth: \(kDownScaledFrameWidth) - kDownScaledFrameHeight: \(kDownScaledFrameHeight)")
    }

    func didCaptureVideo(sampleBuffer: CMSampleBuffer) {
        guard sampleBuffer.numSamples == 1,
              sampleBuffer.isValid,
              CMSampleBufferDataIsReady(sampleBuffer),
              let pixelBuffer = sampleBuffer.imageBuffer else { return }

        // Map the ReplayKit orientation attachment to a video rotation.
        var videoOrientation = VideoRotation._0
        guard let orientationAttachment = CMGetAttachment(sampleBuffer, key: RPVideoSampleOrientationKey as CFString, attachmentModeOut: nil) as? NSNumber else { return }
        let orientation = CGImagePropertyOrientation(rawValue: orientationAttachment.uint32Value) ?? .up
        switch orientation {
        case .up, .upMirrored, .down, .downMirrored:
            videoOrientation = ._0
        case .left, .leftMirrored:
            videoOrientation = ._90
        case .right, .rightMirrored:
            videoOrientation = ._270
        }
        let rotation = RTCVideoRotation(rawValue: videoOrientation.rawValue) ?? ._0

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        // Unlock on every exit path (the early returns used to leak the lock).
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        let pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
        if pixelFormat != kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            print("Extension assumes the incoming frames are of type NV12")
            return
        }

        var outPixelBuffer: CVPixelBuffer? = nil
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         kDownScaledFrameWidth,
                                         kDownScaledFrameHeight,
                                         pixelFormat,
                                         nil,
                                         &outPixelBuffer)
        guard status == kCVReturnSuccess, let outBuffer = outPixelBuffer else {
            print("Failed to create pixel buffer")
            return
        }

        CVPixelBufferLockBaseAddress(outBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(outBuffer, []) }

        // Prepare source pointers.
        var sourceImageY = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                                         height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
                                         width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
                                         rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))

        var sourceImageUV = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
                                          height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)),
                                          width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 1)),
                                          rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1))

        // Prepare destination pointers.
        var outImageY = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(outBuffer, 0),
                                      height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(outBuffer, 0)),
                                      width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(outBuffer, 0)),
                                      rowBytes: CVPixelBufferGetBytesPerRowOfPlane(outBuffer, 0))

        var outImageUV = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(outBuffer, 1),
                                       height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(outBuffer, 1)),
                                       width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(outBuffer, 1)),
                                       rowBytes: CVPixelBufferGetBytesPerRowOfPlane(outBuffer, 1))

        // Down-scale the luma plane.
        var error = vImageScale_Planar8(&sourceImageY, &outImageY, nil, vImage_Flags(0))
        if error != kvImageNoError {
            print("Failed to down-scale the Y plane")
            return
        }

        // Down-scale the interleaved chroma plane.
        error = vImageScale_CbCr8(&sourceImageUV, &outImageUV, nil, vImage_Flags(0))
        if error != kvImageNoError {
            print("Failed to down-scale the CbCr plane")
            return
        }

        let timeStampNs = Int64(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1_000_000_000)
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: outBuffer)
        let videoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: rotation, timeStampNs: timeStampNs)
        delegate?.capturer(self, didCapture: videoFrame)
    }
}

@fukemy (Author) commented Mar 16, 2022

kDownScaledFrameWidth and kDownScaledFrameHeight are chosen to limit memory usage (especially with VP8).

@Meonardo (Owner) commented

I didn't mean the screen capturer; it's about rendering the pixel buffer in an AVSampleBufferDisplayLayer.
I can get an RTCCVPixelBuffer directly from WebRTC when the codec is H.264, but when I change the codec to VP8 I get an RTCI420Buffer, so there is some conversion work to do.
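A minimal sketch of that conversion, copying an RTCI420Buffer's planes into an NV12 CVPixelBuffer so the VP8 frame can then be rendered the same way as the H.264 path (the function name is hypothetical, and a production version would likely replace the scalar chroma loop with a vImage/SIMD interleave):

```swift
import WebRTC
import CoreVideo

// Assumed approach: copy an RTCI420Buffer's Y/U/V planes into an NV12
// CVPixelBuffer suitable for AVSampleBufferDisplayLayer rendering.
func makeNV12PixelBuffer(from i420: RTCI420Buffer) -> CVPixelBuffer? {
    let width = Int(i420.width), height = Int(i420.height)
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                     nil, &pixelBuffer)
    guard status == kCVReturnSuccess, let output = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(output, [])
    defer { CVPixelBufferUnlockBaseAddress(output, []) }

    // Copy the luma plane row by row (strides may differ).
    let dstY = CVPixelBufferGetBaseAddressOfPlane(output, 0)!.assumingMemoryBound(to: UInt8.self)
    let dstYStride = CVPixelBufferGetBytesPerRowOfPlane(output, 0)
    for row in 0..<height {
        memcpy(dstY + row * dstYStride,
               i420.dataY + row * Int(i420.strideY),
               width)
    }

    // Interleave the U and V planes into the single NV12 chroma plane.
    let dstUV = CVPixelBufferGetBaseAddressOfPlane(output, 1)!.assumingMemoryBound(to: UInt8.self)
    let dstUVStride = CVPixelBufferGetBytesPerRowOfPlane(output, 1)
    let chromaHeight = (height + 1) / 2
    let chromaWidth = (width + 1) / 2
    for row in 0..<chromaHeight {
        for col in 0..<chromaWidth {
            dstUV[row * dstUVStride + col * 2]     = i420.dataU[row * Int(i420.strideU) + col]
            dstUV[row * dstUVStride + col * 2 + 1] = i420.dataV[row * Int(i420.strideV) + col]
        }
    }
    return output
}
```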

@fukemy (Author) commented Mar 16, 2022

Hi Meonardo, I just found a big problem: the camera stops right after the app goes to the background. I'm afraid the current approach can't work, because we can't do anything once the camera stops capturing.

For information: while the app is still active, PiP works fine (the picture below shows the "Hi" action for you).
IMG_0983

But when the app goes to the background the camera stops; maybe it's a WebRTC rule.
IMG_0984

@fukemy (Author) commented Mar 16, 2022

I think the problem is not having the multitasking camera access entitlement from Apple.
https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_avfoundation_multitasking-camera-access

Maybe I will try to register again.

@Meonardo (Owner) commented

Hi back!
Yes, you're right! I haven't got the entitlement yet, so the camera capture session pauses when the app goes to the background.
Usually we only want the remote peer shown in the PiP window, just like in a FaceTime call.
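For completeness, the iOS 15 video-call PiP setup from the Apple doc linked at the top of the thread looks roughly like this; `remoteVideoView` (the view hosting the AVSampleBufferDisplayLayer) and the preferred size are assumptions, not code from this repo:

```swift
import AVKit

// Sketch of the iOS 15 video-call PiP setup: embed the remote video view
// in an AVPictureInPictureVideoCallViewController and hand it to the
// PiP controller via a ContentSource.
func makePiPController(sourceView: UIView, remoteVideoView: UIView) -> AVPictureInPictureController? {
    guard AVPictureInPictureController.isPictureInPictureSupported() else { return nil }

    let callViewController = AVPictureInPictureVideoCallViewController()
    callViewController.preferredContentSize = CGSize(width: 1080, height: 1920)
    callViewController.view.addSubview(remoteVideoView)

    let contentSource = AVPictureInPictureController.ContentSource(
        activeVideoCallSourceView: sourceView,
        contentViewController: callViewController)

    let controller = AVPictureInPictureController(contentSource: contentSource)
    // Let PiP start automatically when the app is backgrounded mid-call.
    controller.canStartPictureInPictureAutomaticallyFromInline = true
    return controller
}
```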
