I'm iterating over clustering parameters to try to find the best settings for my data.
To that end, I have a pipeline that goes: Clustering (HS2) -> Sorting Extractor -> Remove units with <10 spikes -> Sorting Analyzer -> Quality Metrics.
Under some clustering conditions, this can result in sorting objects with no units left. This then causes a crash when computing the first extension, in this case random spikes:
Traceback (most recent call last):
File "/home/user/Documents/ephy/test_clustering.py", line 80, in <module>
analyzer.compute("random_spikes",
File "/home/user/Documents/GitHub/spikeinterface/src/spikeinterface/core/sortinganalyzer.py", line 1324, in compute
return self.compute_one_extension(extension_name=input, save=save, verbose=verbose, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/GitHub/spikeinterface/src/spikeinterface/core/sortinganalyzer.py", line 1405, in compute_one_extension
extension_instance.run(save=save, verbose=verbose)
File "/home/user/Documents/GitHub/spikeinterface/src/spikeinterface/core/sortinganalyzer.py", line 2170, in run
self._run(**kwargs)
File "/home/user/Documents/GitHub/spikeinterface/src/spikeinterface/core/analyzer_extension_core.py", line 58, in _run
self.data["random_spikes_indices"] = random_spikes_selection(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/GitHub/spikeinterface/src/spikeinterface/core/sorting_tools.py", line 223, in random_spikes_selection
random_spikes_indices = np.concatenate(random_spikes_indices)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: need at least one array to concatenate
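For reference, a minimal sketch of the pipeline described above (the recording source and all parameter values are placeholders, assuming a recent SpikeInterface API):

```python
# Minimal sketch of the pipeline; the recording is a stand-in and the
# parameters are illustrative, not the actual settings used.
import spikeinterface.full as si

recording = si.generate_recording(num_channels=32, durations=[300.0])  # placeholder recording

# Clustering with HerdingSpikes2 via the sorter wrapper
sorting = si.run_sorter("herdingspikes", recording)

# Remove units with fewer than 10 spikes
counts = sorting.count_num_spikes_per_unit()
keep_unit_ids = [uid for uid, n in counts.items() if n >= 10]
sorting = sorting.select_units(keep_unit_ids)

# With some clustering parameters, keep_unit_ids ends up empty; the analyzer
# is still created and only crashes later, on the first extension.
analyzer = si.create_sorting_analyzer(sorting, recording)
analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])
analyzer.compute("quality_metrics")
```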
While I understand this might be far from the typical use case, wouldn't it make sense to check for the presence of at least one unit in the `create` method of the sorting analyzer? I can draft a small PR if needed.
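Something along these lines is what I have in mind (an illustrative sketch only, not the actual internals of `create_sorting_analyzer`):

```python
# Sketch of the proposed guard, e.g. near the top of SortingAnalyzer.create()
# / create_sorting_analyzer(); names and wording are illustrative.
if sorting.get_num_units() == 0:
    raise ValueError(
        "The sorting has no units: a SortingAnalyzer cannot be created from "
        "an empty sorting. Check the sorting (e.g. after curation) before "
        "creating the analyzer."
    )
```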
I think this makes sense, but I guess we could also just say this is the user's responsibility to check their sorting themselves before using it. I think this is a @alejoe91 and @samuelgarcia level decision.
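For completeness, the user-side check in question would just be a small guard before creating the analyzer, roughly (sketch only):

```python
# User-side guard (sketch): skip empty sortings before building the analyzer.
if sorting.get_num_units() > 0:
    analyzer = si.create_sorting_analyzer(sorting, recording)
else:
    print("No units left after curation for this parameter set; skipping.")
```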