
FEATURE: Use spike templates from one sorting run on all subsequent runs #860

Open
mikemanookin opened this issue Feb 1, 2025 · 1 comment

Comments

@mikemanookin

Feature you'd like to see:

Our data are typically collected over >10 hours of recording, so we have to split spike sorting for the same tissue/cells across several sorting chunks. This makes it difficult to match the same cells across different chunks/runs. Ideally, I would like to sort one chunk and then use the spike templates, etc., from that sorting run on all subsequent chunks, which would save us a lot of extra work. My understanding is that currently the only way to use custom templates is to replace the wTEMP.npz file in the git repository. It would be ideal to be able to point to a directory/file containing the templates from the first run (e.g., by passing an argument such as spike_detect.get_waves -> template_path). Relatedly, it would be helpful to have some documentation on how this is done and on the required format of the wTEMP.npz file. Perhaps I missed it. Thank you!

Additional Context

No response
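As a rough illustration of the requested workflow (not the project's actual API), a custom-template file could be written and read with plain NumPy. The array key `wTEMP`, the `(n_templates, n_samples)` shape, and the `float32` dtype are assumptions about the `.npz` format; pinning those down is exactly what the requested documentation would do.

```python
import numpy as np

def save_templates(templates: np.ndarray, out_path: str) -> None:
    """Save a (n_templates, n_samples) array of spike waveforms as an
    .npz archive, under the (assumed) key name 'wTEMP'."""
    np.savez(out_path, wTEMP=templates.astype(np.float32))

def load_templates(path: str) -> np.ndarray:
    """Load the template array back from the .npz archive."""
    with np.load(path) as data:
        return data["wTEMP"]
```

With helpers like these, the templates from a first sorting run could be exported once and pointed to by every subsequent run, instead of overwriting a file inside the repository.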

@rdarie

rdarie commented Feb 11, 2025

I'd be interested in something like this as well, for a different use case. I am working with recordings that contain a lot of electrical stimulation artifact, which, despite my best efforts to remove it via preprocessing, still seems to cause problems during sorting. Ideally, I'd like to identify templates from snippets of the recording where I know there are no artifacts, and then apply them to the entire dataset.
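To make the "artifact-free snippets" idea concrete, here is a minimal sketch (a hypothetical helper, not part of any sorting package) that, given the sample indices of known stimulation artifacts, enumerates fixed-length windows containing no contaminated samples; template estimation would then be restricted to those windows.

```python
import numpy as np

def clean_windows(n_samples, artifact_idx, pad, win):
    """Yield (start, stop) sample windows that contain no artifacts.

    artifact_idx: sample indices of detected stimulation artifacts.
    pad: number of samples to exclude on each side of an artifact.
    win: window length in samples.
    """
    dirty = np.zeros(n_samples, dtype=bool)
    for i in artifact_idx:
        dirty[max(0, i - pad):min(n_samples, i + pad)] = True

    start = 0
    while start + win <= n_samples:
        seg = dirty[start:start + win]
        if not seg.any():
            yield (start, start + win)
            start += win
        else:
            # Skip just past the last contaminated sample in this window.
            start += int(np.flatnonzero(seg)[-1]) + 1
```

For example, with one artifact at sample 50 (`pad=5`, `win=20`, 100 samples total), the generator yields (0, 20), (20, 40), (55, 75), and (75, 95), skipping the contaminated region around the artifact.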

I'm curious, @jacobpennington and @marius10p, whether you think this approach would be feasible. If I understand the algorithm correctly, the templates are used to detect candidate spike waveforms, which are then passed to the clustering algorithm for final cluster assignment. Are the templates themselves used as an input to the clustering algorithm? Or, more to the point, is the algorithm deterministic, in the sense that candidate waveforms from different files, identified with the same templates, would produce equivalent clusters?
