Feature you'd like to see:

Our data is typically collected over >10 hours of recording, meaning that we need to split spike sorting for the same tissue/cells across several sorting chunks. This makes it difficult to find the same cells across different sorting chunks/runs. Ideally, I would like to sort one chunk and then use the spike templates, etc. from that sorting run on all of the subsequent chunks. This would save us a lot of extra work. My understanding is that currently the only way to use custom templates is to replace the `wTEMP.npz` file in the git repository. It would be ideal to be able to point to a directory/file containing the first run's templates (e.g., by passing a `template_path` argument to `spike_detect.get_waves`). Relatedly, it would be helpful to have some documentation about how this is done and the required format for the `wTEMP.npz` file. Perhaps I missed it. Thank you!
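For anyone else looking at this: an `.npz` file is just a zip archive of named NumPy arrays, so saving templates from one run and reloading them later could look like the sketch below. Note the array key `wTEMP`, the shape, and the filename `my_templates.npz` are all assumptions on my part, since the actual `wTEMP.npz` format is undocumented.

```python
# Hypothetical sketch: persisting spike templates from a first sorting run
# so they can be reused on later chunks. The key name "wTEMP" and the
# (n_templates, n_samples) shape are assumptions, not the documented format.
import numpy as np

# Suppose `templates` holds the templates extracted from the first chunk,
# e.g. 6 templates of 61 samples each.
templates = np.random.randn(6, 61).astype(np.float32)

# Save as an .npz archive of named arrays.
np.savez("my_templates.npz", wTEMP=templates)

# Reload when sorting a subsequent chunk.
loaded = np.load("my_templates.npz")["wTEMP"]
assert loaded.shape == (6, 61)
```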
Additional Context
No response
I'd be interested in something like this as well, for a different use case. I am working with recordings that contain a lot of electrical stimulation artifact which, despite my best efforts to remove it via preprocessing, still seems to cause problems during sorting. Ideally, I'd like to identify templates from snippets of the recording where I know there are no artifacts, and then apply them to the entire dataset.
I'm curious, @jacobpennington and @marius10p, if you think this approach would be feasible. If I understand the algorithm right, the templates are used to detect candidate spike waveforms, which are then passed to the clustering algorithm for the final cluster assignment. Are the templates themselves used as an input to the clustering algorithm? Or, more to the point, is the algorithm deterministic in that candidate waveforms from different files, identified with the same templates, would produce equivalent clusters?
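To make the determinism question concrete, here is a toy matched-filter sketch (not Kilosort's actual implementation): candidate spikes are found by cross-correlating a fixed template against the signal and thresholding. With the same template, data, and threshold, the candidate set is identical on every run; whether the downstream clustering step is equally deterministic is exactly the open question above.

```python
# Toy illustration of template-based spike detection via cross-correlation.
# With fixed inputs, the detected candidate indices are fully deterministic.
import numpy as np

def detect_candidates(signal, template, threshold):
    # Sliding dot product of the template against the signal.
    score = np.correlate(signal, template, mode="same")
    # Candidate spike positions are samples where the score crosses threshold.
    return np.flatnonzero(score > threshold)

rng = np.random.default_rng(0)
template = np.exp(-np.linspace(-2, 2, 21) ** 2)  # toy spike waveform
signal = rng.normal(0, 0.1, 1000)
signal[300:321] += template  # embed two spikes in the noise
signal[700:721] += template

peaks1 = detect_candidates(signal, template, threshold=3.0)
peaks2 = detect_candidates(signal, template, threshold=3.0)
assert np.array_equal(peaks1, peaks2)  # same inputs -> identical candidates
```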