I've dug quite a lot through the code in the Qt app (JS, Py), libopenshot (C++), and libopenshot-audio to see whether I could also improve the generation of the audio samples in their reduced form.
One thing I wanted to address is the samples_per_second constant, which is stored in two places (py and js).
I would instead have that constant transmitted together with the data in the JSON.
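As a rough sketch of that idea (function and constant names are illustrative, not the actual OpenShot API), the Python side could bundle the reduction rate into the payload so the JS side reads it from the data instead of hard-coding its own copy:

```python
import json

SAMPLES_PER_SECOND = 20  # illustrative value; single source of truth on the py side

def get_waveform_json(max_values):
    """Return the reduced samples together with the rate used to
    produce them, so the JS code no longer needs its own constant."""
    return json.dumps({
        "samples_per_second": SAMPLES_PER_SECOND,
        "waveform_data": max_values,
    })
```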
Also, the waveform data is based on max values (see libopenshot-audio: AudioSampleBuffer::getMagnitude).
That library also provides RMS (mean) values, which aren't exposed by the libopenshot Frame.
I've learned that it is pretty useful to see both max peaks and average values visually, so I would propose exposing the RMS values in libopenshot as well and rewriting the get_waveform Python code to provide both max and average values.
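The reduction step could then produce both series in one pass. A minimal sketch, assuming a flat list of floats in [-1, 1] and a known source sample rate (names and signature are hypothetical, not the existing get_waveform):

```python
import math

def reduce_samples(samples, rate, samples_per_second=20):
    """Reduce raw audio samples to one (max, rms) pair per bucket.

    `samples`: flat list of floats in [-1, 1]
    `rate`: source sample rate in Hz
    Returns two parallel lists: per-bucket peak and per-bucket RMS.
    """
    bucket_size = max(1, rate // samples_per_second)
    maxima, averages = [], []
    for start in range(0, len(samples), bucket_size):
        bucket = samples[start:start + bucket_size]
        maxima.append(max(abs(s) for s in bucket))
        averages.append(math.sqrt(sum(s * s for s in bucket) / len(bucket)))
    return maxima, averages
```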
While doing all this, I would also suggest consistently using waveform_data as a name instead of audio_data, to make it obvious that this data has a reduced sample rate.
One of the most confusing behaviours, in my opinion, is that the audio waveform has the clip's animated volume keyframes baked in, but only those that exist at the moment rendering is triggered.
All together, I would suggest changing the waveform JSON data to a format that is not influenced by the volume values.
The volume curve should instead be rendered separately, e.g. as a red line on top of the waveform.
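The comment's original JSON example isn't reproduced here, but a hypothetical shape carrying both series plus the sample rate, independent of any volume keyframes, might look like:

```json
{
  "samples_per_second": 20,
  "waveform_data": {
    "max": [0.81, 0.64, 0.92],
    "average": [0.40, 0.33, 0.51]
  }
}
```

All field names above are illustrative; the point is that the payload is self-describing and volume-free, so the UI can overlay the volume curve on top without regenerating the waveform.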
I wasn't able to implement these changes myself because of the dependency on the library, and because I don't have a development setup right now (I literally just edited the files in the installation folder).
Maybe one of you can implement these changes; they should be pretty simple with a dev setup.
Other considerations
For speed, it would also make a lot of sense, in my opinion, to move the generation of the waveform data completely to C++.
I think it should be integrated with the Clip class and also be cached.
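The caching idea can be sketched as follows (shown in Python for brevity; in libopenshot it would live inside the C++ Clip class, and all names here are hypothetical):

```python
class WaveformCache:
    """Cache reduced waveform data per clip, so it is generated only once
    per (clip, reduction rate) pair and reused on subsequent requests."""

    def __init__(self, generate):
        # `generate` is a callable: (clip_id, samples_per_second) -> data
        self._generate = generate
        self._cache = {}

    def get(self, clip_id, samples_per_second):
        key = (clip_id, samples_per_second)
        if key not in self._cache:
            self._cache[key] = self._generate(clip_id, samples_per_second)
        return self._cache[key]
```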
I've created another issue for this on the libopenshot repo: OpenShot/libopenshot#156
About generation: when changing the volume, the waveform height does not update immediately; however, using right-click > Display > Show Waveform on a clip that already shows a waveform regenerates it at the corresponding height.
This issue exists to track the suggestions made in the comments of #2029, especially the comment quoted above.