Add many audio sources (including voice) #5870
Can there also be an option to capture no sound? When using multiple virtual display windows and playing audio, it currently plays on all windows, with no way to disable it except through the OS sound settings.
https://github.com/Genymobile/scrcpy/blob/master/doc/audio.md#no-audio
Should be fixed by commit. Please review/test/check.
ref: So because VOICE_UPLINK restricts third-party apps, the microphone source cannot be passed from the computer to the phone during calls?
I tested the changes from this PR using a private fork and built the project with the GitHub Action. For my testing scenario, I received a WhatsApp call from a second phone. I tried both the --audio-source=voice-call-downlink and --audio-source=voice-call-uplink options, and in both cases the audio was transferred regardless of which phone was muted. Additionally, with the regular --audio-source=playback option, the audio is no longer played back on the device. Is it possible to extend this behavior to voice calls as well? I am using a Pixel 8 Pro (Android 15) and the Windows client.
👍 Thank you for the test. Fixed:

```diff
diff --git a/server/src/main/java/com/genymobile/scrcpy/audio/AudioSource.java b/server/src/main/java/com/genymobile/scrcpy/audio/AudioSource.java
index 6689611ad..d16b5e387 100644
--- a/server/src/main/java/com/genymobile/scrcpy/audio/AudioSource.java
+++ b/server/src/main/java/com/genymobile/scrcpy/audio/AudioSource.java
@@ -13,8 +13,8 @@ public enum AudioSource {
     MIC_VOICE_RECOGNITION("mic-voice-recognition", MediaRecorder.AudioSource.VOICE_RECOGNITION),
     MIC_VOICE_COMMUNICATION("mic-voice-communication", MediaRecorder.AudioSource.VOICE_COMMUNICATION),
     VOICE_CALL("voice-call", MediaRecorder.AudioSource.VOICE_CALL),
-    VOICE_CALL_UPLINK("voice-call-uplink", MediaRecorder.AudioSource.VOICE_CALL),
-    VOICE_CALL_DOWNLINK("voice-call-downlink", MediaRecorder.AudioSource.VOICE_CALL),
+    VOICE_CALL_UPLINK("voice-call-uplink", MediaRecorder.AudioSource.VOICE_UPLINK),
+    VOICE_CALL_DOWNLINK("voice-call-downlink", MediaRecorder.AudioSource.VOICE_DOWNLINK),
     VOICE_PERFORMANCE("voice-performance", MediaRecorder.AudioSource.VOICE_PERFORMANCE);

     private final String name;
```
The playback audio source uses a specific API, where we can request whether to duplicate the audio or not.
Thanks for the quick response! 👌🏼 My idea was to use scrcpy to transfer both game audio and voice chat from Call of Duty Mobile to my PC for streaming with OBS. While everything works fine for the most part, I'm encountering an issue with voice call audio. When headphones are connected directly to the phone, the game sound and voice chat are bundled together. However, since I'm using the headphones on my PC, the audio streams remain separated on the phone. Do you have any suggestions for this use case? Unfortunately, a capture card isn't an option as it reduces the refresh rate from 120 Hz to 60 Hz. If it's more convenient, we could discuss this privately to avoid cluttering the PR comment section.
@rom1v Hello, I have a phone in country A with a local SIM card. I have a PC in country B. I can already use scrcpy from PC-B to phone-A, passing through PC-A's adb server. What I need is to be able to make phone calls. From what I'm reading here, I should be able to receive the audio of the call using voice-call-downlink. Anyway, how can I use voice-call-downlink? I'm on a MacBook and use Homebrew to install scrcpy. So, couldn't I hear what the person in the call is saying through voice-call-downlink and, at the same time, make the phone record from the mic what PC-A is outputting from its speakers? Thanks very much.
See discussions in #3880.
@rom1v what about getting the voice downlink while using the phone's microphone? Is it possible? If you don't get what I mean, please just read my question above again; you answered the part about sending audio to the microphone, but not the other part. Thanks
I don't know; I just expose the audio sources from the Android API. How the Android implementation behaves with and without the mic must be tested per device; I have no control over that.
In fact, there are still several problems. Firstly, the resulting audio stream is broken in VLC and Firefox (it works "fine", with warnings, in mpv). Secondly, the "fixed" PTS are not correct: we push blocks of 960 samples, but the Opus encoder outputs blocks of 1024 samples, so after 960 samples, it waits for the next 960 samples before producing an output packet… so fixing the PTS on the output side adds noise to the timestamps. I don't know how to record a correct file while compensating for clock drift (so that the video and audio remain synchronized) or handling "missing" silence packets.
Okay, but how can I use the voice downlink source? Can you ship it to Homebrew so I can install it from there? Or even as a release here on GitHub, or a pre-release? Or am I forced to build from source? I need it on macOS arm64.
I actually tested both voice-downlink and voice-uplink with a WhatsApp call. However, I noticed that both audio sources seemed to include both up- and downlink audio. This could be due to WhatsApp potentially using the Android API differently than other applications (this is just my hypothesis - perhaps someone can confirm?). You'll need to test different scenarios as this feature is still under development. I've created a fork that includes these changes. It's publicly available on my GitHub and built with the GitHub Action provided by this repo :) |
I would need to use it for normal phone calls using Android 14 or 15 phone. |
Maybe Bluetooth over TCP/IP is a solution for you |
@yNEX any idea/example? |
I couldn't find any direct solution online for this. You might have to do some research. I don't know if USB over IP solutions like VirtualHere could help you. This could make a USB Bluetooth Adapter available over the Internet like it is locally attached to another PC. Just an idea, don't know if it works |
Here is a more detailed explanation of the problems it causes. Any insights are appreciated.

**Context**

On the device, audio is captured using the Android audio capture API. Audio "duration" can mean two different things:
 - the number of captured samples divided by the sample rate;
 - the difference between the PTS of the packets.

For example, if audio is captured at 48kHz, in theory we should get 48000 samples per second (by definition). And for each block read, we retrieve the PTS (presentation timestamp) from the system clock.

(The difference between accuracy and precision was illustrated by an image, omitted here.)

Concretely, in this case, imprecision means that every read of n samples does not correspond exactly to a PTS difference of n / samplerate. For example, when reading blocks of 960 samples (20ms), the PTS difference between blocks might be 20.124, 20.789, 19.756, 21.112, 19.024… but on average, it's 20ms.

Inaccuracy, on the other hand, is related to clock drift: we cannot expect the audio clock to be absolutely exact. So in practice, 48kHz might mean that 48000 samples are produced every 1.003 seconds on average, or equivalently, that ~47856 samples are produced every second (not exactly 48000).

In addition to these issues, a temporary lag might cause a device to produce only 47000 samples during a given second instead of 48000. And (related to this PR), some audio sources do not produce any sample when disabled (i.e. 0 samples per second for a given period of time).

For real-time playback, these variations are compensated by the audio regulator, which does not use PTS at all. The problem is recording.

**Recording**

To record, the scrcpy client directly muxes the packets encoded on the device into an MP4 or MKV container (it does not reencode). For each media packet, two pieces of information are needed: the encoded data and the PTS. But as we've seen, the number of samples and the PTS do not exactly match. That's the core of the issue. I don't know if containers (MKV and MP4) require them to match (I have read somewhere that it is the case). We could make them match by adjusting the PTS (by the way, this is what the OPUS encoder does, out of our control), but after some time, the audio would be significantly out of sync with the video.

Ideally, for the scrcpy use case, I would like to write the data and the original PTS as is (this is what the scrcpy client currently does), and the player would play them correctly with compensation. For small differences, it seems that this is more or less the case (not sure the behavior is absolutely correct though), but if there are "holes" (when recording a voice source while no voice call is active), it just does not work correctly, with any player.

So what should we do?
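The discontinuity handling described in the discussion can be sketched in plain Java. This is not the actual scrcpy implementation; the class name, the 2ms tolerance, and the method signatures are illustrative. The idea: if the PTS gap between two consecutive packets exceeds the duration covered by the previous packet's samples, the difference corresponds to missing silence.

```java
// Minimal sketch (not the actual scrcpy code) of PTS-based discontinuity
// detection: compare each packet's PTS against the PTS expected from the
// previous packet's sample count, and report the missing silence samples.
public class DiscontinuityDetector {
    private final int sampleRate;           // e.g. 48000
    private final long toleranceUs = 2000;  // allow ~2ms of PTS jitter (arbitrary)
    private long lastPtsUs = -1;
    private int lastSamples;

    public DiscontinuityDetector(int sampleRate) {
        this.sampleRate = sampleRate;
    }

    /**
     * Returns the number of silence samples to insert before this packet
     * (0 if the stream is continuous within the tolerance).
     */
    public long push(long ptsUs, int samples) {
        long silence = 0;
        if (lastPtsUs != -1) {
            long expectedPtsUs = lastPtsUs + lastSamples * 1_000_000L / sampleRate;
            long gapUs = ptsUs - expectedPtsUs;
            if (gapUs > toleranceUs) {
                silence = gapUs * sampleRate / 1_000_000L;
            }
        }
        lastPtsUs = ptsUs;
        lastSamples = samples;
        return silence;
    }

    public static void main(String[] args) {
        DiscontinuityDetector d = new DiscontinuityDetector(48000);
        long s0 = d.push(0, 960);          // first packet: no reference yet
        long s1 = d.push(20_000, 960);     // 20ms later: continuous
        long s2 = d.push(1_040_000, 960);  // 1s hole: ~1s of silence missing
        System.out.println(s0 + " " + s1 + " " + s2); // prints "0 0 48000"
    }
}
```

This only works if the input PTS are trustworthy, which is exactly the assumption broken by encoders that rewrite PTS from the sample count.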
Look, I cannot understand what you're saying, but thank you for your effort! Imagine you're from Germany: you have a phone in Germany, you go to Australia, and you need to use the phone that you left in Germany with a specific German number. How would you do it?
I tested on my Redmi K70 Pro running Android 15. I think we need to find out why most devices only produce about one quarter of the samples compared to "real time", and what those samples are (maybe they have a different format, but are incorrectly interpreted?)
👍 (the device I tested was Pixel 8)
What do you mean by "one quarter samples"?
I logged near
On Mi 11 running Android 13, each |
Oh, OK. In fact, it's real time, but an 80ms block is produced every 80ms (I had noticed 40ms on my Pixel 8). So if you read several times, you should get the remaining samples immediately. For example:
Is it the case on your device?
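The reading pattern being suggested can be simulated in plain Java (no Android classes; the `DrainLoop` name and the buffer model are hypothetical): when the capture delivers one large block at once, a reader that loops until a short read drains all buffered samples immediately instead of waiting another period.

```java
// Simulation of draining a capture buffer: one 80ms block (3840 samples at
// 48kHz) arrives at once, and repeated 20ms reads consume it immediately.
public class DrainLoop {
    private int buffered; // samples already produced by the capture

    void produce(int samples) { buffered += samples; }

    // Non-blocking read of up to 'chunk' samples; returns samples read.
    int read(int chunk) {
        int n = Math.min(chunk, buffered);
        buffered -= n;
        return n;
    }

    // Read repeatedly until the buffer is empty, like reading the capture
    // several times in a row after a large block arrives.
    int drain(int chunk) {
        int total = 0;
        int n;
        while ((n = read(chunk)) > 0) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        DrainLoop capture = new DrainLoop();
        capture.produce(3840); // one 80ms block at 48kHz arrives at once
        System.out.println(capture.drain(960)); // prints 3840
    }
}
```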
Hi @rom1v |
Voice call audio is captured on your Android device, transmitted to your laptop, and played on your laptop. Not sure if this is what you're asking, but this is only one-way: the microphone of your laptop is not captured to be forwarded to your Android device.
I expect that I can use the speaker and microphone of the laptop to answer a call (it should be two-way). I ran some tests but could not hear the voice call audio coming from the laptop speaker.
It is not. What this PR adds is the ability to capture many audio sources from the Android device, including voice calls. Forwarding the computer microphone to the Android device is not implemented. See #3880.
You mean from the laptop microphone, I guess.
No, I meant that the voice call audio captured from the Android device was expected to be heard from the laptop speaker.
Yes, it should (if you specify it). Maybe what you mean is that you want to capture multiple audio sources at once (at least the device audio output + voice calls). That is not supported.
I tried this option, but it did not work in my test (no audio from the laptop speaker).
You're testing while making a phone call, right? What is the full output in the console? (run
YES
Below is the full output.
It appears that an audio stream is correctly transmitted to the client. Maybe that's a stream full of silence (for some reason on your device). If you mirror the device audio output (i.e. without
The voice audio is only heard via phone speaker
Yes, I can get the device output sound from laptop speaker. |
@rom1v I tested every type of voice call source, and they all send both uplink and downlink (without differentiation). Also, you should add a virtual microphone, as someone did with some custom code. Then, if you could mix the voice downlink/uplink with the phone audio in general and have the microphone feature, the phone would be 100% usable remotely to make calls, etc.
Only enable them if SC_AUDIO_REGULATOR_DEBUG is set, as they may spam the output. PR Genymobile#5870

Report the number of silence samples inserted due to underflow every second, along with the other metrics. PR Genymobile#5870

The default OPUS and FLAC encoders on Android rewrite the input PTS so that they exactly match the number of samples. As a consequence:
 - audio clock drift is not compensated
 - implicit silences (without packets) are ignored

To work around this behavior, generate new PTS based on the current time (after encoding) and the packet duration. PR Genymobile#5870

The audio regulator assumed a continuous audio stream. But some audio sources (like the "voice call" audio source) do not produce any packets on silence, breaking this assumption. Use PTS to detect such discontinuities. PR Genymobile#5870

Store the target audio source integer (one of the constants from android.media.MediaRecorder.AudioSource) in the AudioSource enum (or -1 if not relevant). This will simplify adding new audio sources. PR Genymobile#5870

Expose more audio sources from MediaRecorder.AudioSource. Refs https://developer.android.com/reference/android/media/MediaRecorder.AudioSource Fixes Genymobile#5412 Fixes Genymobile#5670 PR Genymobile#5870
Is it possible to have an app inject audio into a phone call, or transfer the call to a voice bucket? Perfect for telemarketers etc.: being put on hold with muzak and told how their call is SO important. Or to turn it around on organizations: "This call will be recorded for quality assurance" or "No consent is given to record this, and copyright is claimed on any voice performance, with a license fee of $10,000 that you agree to and bind your employer to by continuing this call". Time to fight back against big companies.
I'd be happy to pay good coin for such an implementation.
Not possible via an Android app AFAIK. Via scrcpy, see #6439. |
Hi there,
Is it possible on GrapheneOS? What exactly blocks it, and which functions are needed?
I think it would have to be added to the OS codebase. Probably the best you can do is ask GrapheneOS on GitHub to add this feature.

The existing audio sources were:

- `output` (default): forwards the whole audio output, and disables playback on the device (mapped to `REMOTE_SUBMIX`).
- `playback`: captures the audio playback (Android apps can opt out, so the whole output is not necessarily captured).
- `mic`: captures the microphone (mapped to `MIC`).

This PR adds:

- `mic-unprocessed`: captures the microphone unprocessed (raw) sound (mapped to `UNPROCESSED`).
- `mic-camcorder`: captures the microphone tuned for video recording, with the same orientation as the camera if available (mapped to `CAMCORDER`).
- `mic-voice-recognition`: captures the microphone tuned for voice recognition (mapped to `VOICE_RECOGNITION`).
- `mic-voice-communication`: captures the microphone tuned for voice communications (it will for instance take advantage of echo cancellation or automatic gain control if available) (mapped to `VOICE_COMMUNICATION`).
- `voice-call`: captures voice call (mapped to `VOICE_CALL`).
- `voice-call-uplink`: captures voice call uplink only (mapped to `VOICE_UPLINK`).
- `voice-call-downlink`: captures voice call downlink only (mapped to `VOICE_DOWNLINK`).
- `voice-performance`: captures audio meant to be processed for live performance (karaoke); includes both the microphone and the device playback (mapped to `VOICE_PERFORMANCE`).

**Discontinuities**
The existing audio sources always produce a continuous audio stream. A major issue is that some new audio sources (like the "voice call" source) do not produce packets on silence (they only capture during a voice call).
The audio regulator (the component responsible for maintaining a constant latency) assumed that the input audio stream was continuous. With this PR, it now detects discontinuities based on the input PTS (and adjusts its behavior). This only works correctly if the input PTS are "correct".
Another major problem is that, even if the capture timestamps are correct, some encoders (OPUS) rewrite the PTS based on the number of samples (ignoring the input PTS). As a consequence, when encoding in OPUS, the timings are broken: they represent a continuous audio stream where the silences are removed. This breaks the discontinuity detection in the audio regulator (we could work around the problem by relying on the current receive date, since real-time playback itself does not depend on PTS). But the most important problem is that it breaks recording timings. For example:
If the voice call does not start immediately, the audio will not be played at the correct date.
With the AAC encoder, it works (the encoder on the device does not rewrite the PTS based only on the number of samples):
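One of the commit messages quoted earlier describes a workaround: ignore the PTS rewritten by the encoder and regenerate each packet's PTS from the current time and the packet duration. Here is a minimal sketch of that idea, not the actual scrcpy code; the `PtsRegenerator` name is hypothetical, and the clock is injected so the logic can be tested deterministically.

```java
// Sketch: regenerate a packet's PTS from the current time (after encoding)
// minus the packet duration, so implicit silences and clock drift are
// reflected in the output timestamps instead of being erased by the encoder.
import java.util.function.LongSupplier;

public class PtsRegenerator {
    private final LongSupplier nowUs; // current time in microseconds

    public PtsRegenerator(LongSupplier nowUs) {
        this.nowUs = nowUs;
    }

    // Called when an encoded packet becomes available.
    // durationUs = samples * 1_000_000 / sampleRate.
    public long regenerate(long durationUs) {
        // The packet's content started durationUs before "now".
        return nowUs.getAsLong() - durationUs;
    }

    public static void main(String[] args) {
        long[] clock = {1_000_000};
        PtsRegenerator r = new PtsRegenerator(() -> clock[0]);
        // A 20ms packet finishing at t=1s started at t=0.98s
        System.out.println(r.regenerate(20_000)); // prints 980000
    }
}
```

The trade-off, as discussed above, is that "now" includes encoding latency and scheduling jitter, so the regenerated PTS are noisy even though they track real time on average.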
This PR is in draft due to this unsolved issue.
Aims to fix #5670 and #5412.