Replies: 2 comments
-
Hi @JotEff, I think if your experimental tolerance for variability in the synchrony is really low, then the way that jsPsych presents audio and HTML content is not going to be precise enough. Web audio playback operates on a different clock than the one that controls visual presentation, and getting really good synchrony is hard. It also varies across browsers, so there's not even a hacky solution that I've found satisfactory. If you need to run the experiment online, my suggestion is to encode the stimuli as video clips; this is the only way I know of to precisely align audio and visual content in a browser. This might change significantly in the future as web audio systems update (just in case someone is reading this answer a few years from now!).
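To make the video-clip suggestion concrete, here is a minimal sketch of a trial that plays one pre-rendered clip, assuming jsPsych v7 and its bundled `@jspsych/plugin-video-keyboard-response` plugin; the file name `stimuli/word_sync.mp4` is a placeholder for a clip in which the written word and the spoken word were aligned offline (e.g. with a video editor or ffmpeg):

```javascript
// Sketch of a jsPsych v7 trial presenting a pre-rendered audio+text clip.
// 'stimuli/word_sync.mp4' is a hypothetical file name.
const syncTrial = {
  // In the browser, jsPsychVideoKeyboardResponse is the plugin class loaded via
  // a <script> tag; the string fallback only keeps this snippet runnable elsewhere.
  type: typeof jsPsychVideoKeyboardResponse !== 'undefined'
    ? jsPsychVideoKeyboardResponse
    : 'jsPsychVideoKeyboardResponse',
  stimulus: ['stimuli/word_sync.mp4'], // array: the browser plays the first supported format
  choices: 'NO_KEYS',                  // collect no keyboard response during playback
  trial_ends_after_video: true         // advance as soon as the clip finishes
};

// In the browser you would then run:
// const jsPsych = initJsPsych();
// jsPsych.run([syncTrial]);
```

Because the audio and the on-screen text live in a single media file, their relative timing is fixed at encoding time rather than depending on the browser's audio and rendering clocks.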
Not really. Technically it's possible to run jsPsych inside a Node.js environment, but I wouldn't recommend pursuing this option: it's complicated and not worth it. Hope that's helpful, and hopefully we'll be able to handle this better in a few years!
-
Hi @jodeleeuw, thank you very much for this detailed answer. I think we will settle on your suggestion to use video clips to present our stimulus material. It really seems to be the best (and only) option that achieves precise audio-visual synchrony.
-
Dear all,
I am currently setting up a study on language learning. The key manipulation in the experiment concerns the presentation timing of spoken and written word forms: the visual and auditory stimuli are presented either consecutively or synchronously. Importantly, in the synchronous condition the written word needs to appear in sync with the onset of the spoken word.
The precision of audio-visual synchrony appears to be rather poor across various online experiment platforms and to vary across operating systems and browsers (Anwyl-Irvine et al., 2020; Bridges et al., 2020).
Does anyone have up-to-date experience concerning the precision of audio-visual synchrony in jsPsych? Would you say it’s feasible at all to conduct a study with the experimental setup outlined above outside the lab?
I am also open to options that include offline applications where participants download the experiment as an executable to their devices and run it from there. Is that something that would be possible in the current version of jsPsych?
Your help would be very much appreciated!
Thanks very much,
Johanna