procedural connectivity for static connections #51

Open
chanokin opened this issue May 12, 2020 · 30 comments
Labels
enhancement New feature or request

Comments

@chanokin
Collaborator

Hi,

Would it be (easily) possible to expose the procedural generation of synapses to PyNN GeNN?

@chanokin chanokin added the enhancement New feature or request label May 12, 2020
@neworderofjamie
Contributor

neworderofjamie commented May 12, 2020

It would be quite a bit of work, but it would definitely be possible (we were hoping to get a Google Summer of Code student to implement this stuff this summer but sadly it was not to be). The basic strategy would be to:

  1. Add a random.py to PyNN GeNN and define a native RNG class to signify to PyNN that you want to use the on-GPU RNG - this would build suitable GeNN variable initialization snippets (like these) from the PyNN ones
  2. Add a connectors.py to PyNN GeNN and add GeNN sparse connectivity initialization code (like these)
  3. If you are using the native RNG don't build connectivity/expand variables in Python, instead use the PyGeNN syntax to initialize variables and connectivity
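The three steps above could be sketched roughly as follows. This is a minimal, hypothetical sketch: the class and method names are my own invention, not the real PyNN GeNN API, and the snippet-name mapping is an assumption (the real snippet names live in GeNN's initVarSnippet headers).

```python
# Hypothetical sketch of a NativeRNG for a pynn_genn random.py: its only
# job is to signal "initialize on-GPU" and to map PyNN distribution names
# to the names of GeNN variable-initialisation snippets. All names here
# are assumptions for illustration.
class NativeRNG:
    # Assumed mapping; the real snippet names live in GeNN's initVarSnippet.h
    _DISTRIBUTION_SNIPPETS = {
        "uniform": "Uniform",
        "normal": "Normal",
        "exponential": "Exponential",
    }

    def __init__(self, seed=None):
        self.seed = seed

    def supports(self, distribution_name):
        """True if this distribution can be generated on-device."""
        return distribution_name in self._DISTRIBUTION_SNIPPETS

    def snippet_name(self, distribution_name):
        """Name of the GeNN init snippet backing a PyNN distribution."""
        try:
            return self._DISTRIBUTION_SNIPPETS[distribution_name]
        except KeyError:
            raise NotImplementedError(
                "'%s' cannot be initialised on-device" % distribution_name)
```

Connectors (step 2) would follow the same pattern, mapping PyNN connector classes to GeNN sparse-connectivity-initialisation snippets, and step 3 would branch on `isinstance(rng, NativeRNG)`.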

This would massively improve the PyNN interface as it would enable both procedural connectivity and on-GPU initialization which is a big win for rapidly iterating on big models. I don't think I really have time to do this but I'd be more than happy to help.

I seem to remember you played a bit with my PyNN SpiNNaker implementation - that did very similar stuff.

@neworderofjamie
Contributor

(this is something of a duplicate of #4 and #5)

@chanokin
Collaborator Author

chanokin commented May 18, 2020

If I understand correctly:

  1. Does this mean generating GeNN/C++ code (the string portions of the examples) for distribution functions so that they can be executed on-GPU? Or do we just need to 'point' to already-existing ones? If the former, would one need to convert from Uniform to other distributions, or use native CUDA generators where available? Never mind, you handle this in GeNN, right? This would be the duplicate of Random variable initialization on device #4?

  2. Similar to 1, generate the strings needed for each connector? This would be the duplicate of Sparse connectivity initialisation on device #5?

  3. Bypass the current host-side behaviour and use on-GPU initialization where available

I did port your SpiNNaker generators to the 'official' toolchain. I will give this a go, but I don't have a local GPU so it's probably going to be slow progress.

@neworderofjamie
Contributor

The code strings get turned into CUDA, C++ (for CPU) or, soon, OpenCL, so the $(gennrandXX) calls do indeed get turned into calls to the CUDA RNGs.

In PyNN GeNN, we previously re-implemented models (e.g. for neurons, synapses and electrodes) rather than using the ones included with GeNN, so we can customize them at runtime and don't need separate code paths for built-in and special models - I'd probably do the same for consistency.

@chanokin
Collaborator Author

chanokin commented Jul 8, 2020

Let's do the procedural part :)

@neworderofjamie
Contributor

neworderofjamie commented Jul 9, 2020

You have done the hard part already - all you should need to do is add another case that sets matrix_type to PROCEDURAL_PROCEDURALG here. You also need to set:

syn_pop.pop.set_span_type(genn_wrapper.SynapseGroup.SpanType_PRESYNAPTIC)
syn_pop.pop.set_num_threads_per_spike(NUM_THREADS_PER_SPIKE)

where you create the actual PyGeNN populations (here and here, I think). Unless your model is massive (millions of neurons), you need to tune NUM_THREADS_PER_SPIKE to get decent performance (4 or 8 on a desktop GPU and 16 on a high-end GPU typically work OK).

The only caveats are that it will only work if all projection parameters are either constant or initialized with a variable initialization snippet (hence why #53 and #52 were required), and there's currently no support for downloading procedural weights or connectivity, so you'll need to add some more error checks.
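The second caveat could be handled with a guard along these lines (a hedged sketch with made-up attribute and function names, not the actual PyNN GeNN code):

```python
# Sketch with hypothetical attribute names: procedural weights are
# generated on the fly during simulation and never stored, so a download
# request should fail loudly instead of returning garbage.
def get_weights(projection):
    if getattr(projection, "use_procedural", False):
        raise NotImplementedError(
            "Weights of a procedural projection are generated on the fly "
            "and cannot be downloaded from the device")
    return projection._stored_weights  # normal (non-procedural) path
```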

@chanokin
Collaborator Author

chanokin commented Jul 9, 2020

The only caveats are that it will only work if all projection parameters are either constant or initialized with a variable initialization snippet (hence why #53 and #52 were required), and there's currently no support for downloading procedural weights or connectivity, so you'll need to add some more error checks.

So all other connectors could potentially be procedural? For example, would an all-to-all just store the constant weight, and a distance-dependent one store indices and a constant?

@neworderofjamie
Contributor

All-to-all is a bit of a special case as there is no 'connectivity' - but if you use DENSE_PROCEDURALG you can have e.g. an all-to-all matrix of normally distributed weights. I don't know enough about how the distance-dependent connectors are defined, but I suspect they could be implemented in this way. https://github.com/genn-team/genn/blob/master/include/genn/genn/initSparseConnectivitySnippet.h shows how the current connectors are implemented, so anything you can implement efficiently in that form works 😄 I imagine you might need to pass through arrays of pre- and postsynaptic neuron coordinates or something, which FixedNumberTotal demonstrates.
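As a conceptual illustration of why DENSE_PROCEDURALG needs no stored weights, here is the idea in plain Python (not GeNN code, and the function name is my own): every weight is regenerated on demand from a deterministic per-synapse RNG, so nothing is kept in memory.

```python
import random

# Plain-Python illustration (not GeNN code) of procedural weights: rather
# than storing an n_pre x n_post matrix, each weight is regenerated on
# demand from an RNG seeded by the synapse's flat index, so the same
# (pre, post) pair always yields the same value and memory use does not
# grow with the number of synapses.
def procedural_weight(pre, post, n_post, seed=1234, mu=0.0, sigma=1.0):
    flat_index = pre * n_post + post
    rng = random.Random(seed * 1_000_003 + flat_index)
    return rng.gauss(mu, sigma)  # e.g. normally distributed weights
```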

@chanokin
Collaborator Author

chanokin commented Jul 9, 2020

I'm more confused now 😆. Would a good test for using procedural be

if weights == constant or weights == on-device or connectivity == on-device

or something like

if (weights == constant or weights == on-device) and connectivity == on-device

@neworderofjamie
Contributor

if (weights == constant or weights == on-device) and (connectivity == on-device or connectivity == all-to-all) maybe?
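Expressed as a small predicate (illustrative names, not the real PyNN GeNN API), the test would be:

```python
# Illustrative predicate (argument names are assumptions, not the real
# PyNN GeNN API): a projection can use a procedural matrix type when its
# weights need no per-synapse storage and its connectivity can either be
# generated on-device or is trivially all-to-all.
def can_use_procedural(weights_constant, weights_on_device,
                       connectivity_on_device, is_all_to_all):
    weights_ok = weights_constant or weights_on_device
    connectivity_ok = connectivity_on_device or is_all_to_all
    return weights_ok and connectivity_ok
```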

@chanokin
Collaborator Author

First hurdle: I believe there should be something here which allows the procedural matrix type to be used, but I'm not sure what
https://github.com/genn-team/genn/blob/38fa54e46f281e3bea1433283349fbe4979751bf/pygenn/genn_groups.py#L761-L765

As a side note, I can't run procedural stuff on the CPU - it says it's not supported/enabled - but I'm running stuff remotely in the office.

@neworderofjamie
Contributor

neworderofjamie commented Jul 13, 2020

I think the two tests you need to add are:

@property
def has_procedural_connectivity(self):
    """Tests whether synaptic connectivity is procedural"""
    return (self.matrix_type & SynapseMatrixConnectivity_PROCEDURAL) != 0

@property
def has_procedural_weights(self):
    """Tests whether synaptic weights are procedural"""
    return (self.matrix_type & SynapseMatrixWeight_PROCEDURAL) != 0

Procedural connectivity doesn't make a lot of sense on CPU and there'd be quite a lot of pain integrating a suitable RNG so we basically didn't bother.

@chanokin
Collaborator Author

So if it passes these two tests, we can go ahead and load as usual (except not having the pointers to the arrays)? Or is there anything else to load for the procedural case?

@neworderofjamie
Contributor

oh shit - I wasn't really answering the question at all and I think we accidentally introduced another bug in that code. I think you need to update the outer test to:

if not self.is_dense and not self.has_procedural_connectivity and self.weight_sharing_master is None:

Then, everything should work ok

@chanokin
Collaborator Author

chanokin commented Jul 13, 2020

Ok, I've tried this and it does run but it does not give back the expected results. I'm using a simple network:

  • a SpikeSourceArray (1 spike per neuron),
  • a OneToOne projection,
  • and an IF_curr_exp output population

In my non-procedural run I get 1 spike per output neuron, but when I use the procedural approach I get 5 😮

@neworderofjamie
Contributor

Could you post the PyNN model so I can have a go?

@chanokin
Collaborator Author

Here's the code. I just looked at the generated code and I do get the appropriate threads per spike - I was looking at the CUDA threads before :(

import numpy as np
import pynn_genn as sim
import copy
from pynn_genn.random import NativeRNG, NumpyRNG, RandomDistribution

np_rng = NumpyRNG(seed=1)
rng = NativeRNG(np_rng, seed=1)

timestep = 1.
sim.setup(timestep)

n_neurons = 100
params = copy.copy(sim.IF_curr_exp.default_parameters)
pre = sim.Population(n_neurons, sim.SpikeSourceArray,
                     {'spike_times': [[1 + i] for i in range(n_neurons)]},
                     label='pre')
params['tau_syn_E'] = 5.
post = sim.Population(n_neurons, sim.IF_curr_exp, params,
                      label='post')
post.record('spikes')

dist_params = {'low': 0.0, 'high': 10.0}
dist = 'uniform'
rand_dist = RandomDistribution(dist, rng=rng, **dist_params)
var = 'weight'
on_device_init = bool(1)
conn = sim.OneToOneConnector(use_procedural=bool(1))
syn = sim.StaticSynapse(weight=5, delay=1)#rand_dist)
proj = sim.Projection(pre, post, conn, synapse_type=syn)

sim.run(2 * n_neurons)
data = post.get_data()
spikes = np.asarray(data.segments[0].spiketrains)
print(spikes)
sim.end()

all_at_appr_time = 0
sum_spikes = 0
for i, times in enumerate(spikes):
    sum_spikes += len(times)
    if int(times[0]) == (i + 9):
        all_at_appr_time += 1

assert sum_spikes == n_neurons
assert all_at_appr_time == n_neurons
#each neuron spikes once because

@neworderofjamie
Contributor

what branch should I pull to try this?

@chanokin
Collaborator Author

Ah sorry, I forgot to put that:

  • PyNN GeNN is procedural_synapses
  • GeNN is pynn_procedural_synapses

@neworderofjamie
Contributor

neworderofjamie commented Jul 13, 2020

So, the bug is caused by the OneToOne connector not correctly handling multiple threads per spike. I could fix it but it makes no sense to use multiple threads with one-to-one connectivity. With procedural connectivity, each thread processes a single presynaptic spike and, as there's only one synapse on each synaptic row, one of the threads will process a single synapse and the rest will sit idle.
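The partitioning described above can be illustrated in plain Python (a hypothetical helper, not GeNN code): a spiking neuron's row of synapses is divided between `num_threads_per_spike` threads, so with one-to-one connectivity all but one thread per spike gets an empty slice.

```python
# Plain-Python illustration of the presynaptic-parallelism strategy:
# each presynaptic spike's synaptic row is split into contiguous chunks,
# one per thread. With one-to-one connectivity every row has length 1,
# so all threads except the first receive an empty slice and sit idle.
def row_slice(row_length, thread, num_threads_per_spike):
    per_thread = -(-row_length // num_threads_per_spike)  # ceiling division
    start = min(thread * per_thread, row_length)
    stop = min(start + per_thread, row_length)
    return range(start, stop)
```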

@chanokin
Collaborator Author

chanokin commented Jul 13, 2020

So we should compute/select a number of threads per spike for each projection? I thought it was a global thing 😮

@neworderofjamie
Contributor

neworderofjamie commented Jul 13, 2020

I think that might be best - tuning it depends on firing rates, connectivity, population sizes and your GPU. For the cortical models I was simulating, those were all about the same so I used a constant value but you can't really rely on that.
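A rough per-projection heuristic along those lines (my own sketch, not GeNN code) might target a fixed number of synapses per thread and clamp to the range suggested above:

```python
# Hypothetical heuristic: give each thread roughly a fixed number of
# synapses to process, round down to a power of two, and clamp to the
# 1..16 range suggested in the discussion. The target of 32 synapses per
# thread is an assumed tuning constant, not a GeNN value.
def pick_threads_per_spike(avg_row_length, target_synapses_per_thread=32):
    threads = max(1, avg_row_length // target_synapses_per_thread)
    threads = 1 << (threads.bit_length() - 1)  # round down to power of two
    return min(threads, 16)
```

In practice the best value also depends on firing rates and the GPU itself, so benchmarking a few candidates per projection would still be worthwhile.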

@chanokin
Collaborator Author

chanokin commented Jul 13, 2020

It's working for one-to-one but for all-to-all it complains that:

terminate called after throwing an instance of 'std::runtime_error'
  what():  Cannot use procedural connectivity without specifying connectivity initialisation snippet
Aborted (core dumped)

Also working with fixed-num-post :) Is the above error something I can fix, @neworderofjamie? Now working with all-to-all :)

@chanokin
Collaborator Author

To figure out how snippets work and make one for the full-fledged distance-dependent connector, I've made a restricted version which has distance and probability as separate dependencies (maximum distance and fixed probability):

class MaxDistanceFixedProbabilityConnector(DistanceDependentProbabilityConnector):
    __doc__ = DistanceDependentProbabilityConnector.__doc__

    def __init__(self, max_dist, probability, allow_self_connections=True,
                 rng=None, safe=True, callback=None):
        d_expr = "%s * (d <= %s)" % (probability, max_dist)
        DistanceDependentProbabilityConnector.__init__(
            self, d_expr, allow_self_connections, rng, safe, callback)
        self.probability = probability
        self.max_dist = max_dist
        self._builtin_name = 'MaxDistanceFixedProbability'
        self.connectivity_init_possible = isinstance(rng, NativeRNG)
        self._needs_populations_shapes = True
        self.shapes = None

    @property
    def _conn_init_params(self):
        params = {
            'prob': self.probability,
            'max_dist': self.max_dist,
        }
        return dict(list(params.items()) + list(self.shapes.items()))

and its associated GeNN Snippet
https://github.com/genn-team/genn/blob/ad76436c1a67f2da45ab789f2f6d0e4751caf33f/include/genn/genn/initSparseConnectivitySnippet.h#L364-L490

It seems to be generating the correct synapses, but I don't know exactly what SET_CALC_MAX_ROW_LENGTH_FUNC and SET_CALC_MAX_COL_LENGTH_FUNC should return, so I may be allocating too much memory for the connector. Is it the typical number of outgoing (incoming) synapses per pre (post) neuron? Should this custom connector be added to GeNN, or can it be something I keep in my own PyNN GeNN custom models? Can this be done purely in Python without needing the C++ snippet?

I have some questions / steps for the full-fledged distance-dependent snippet:

  • A 'd-expression' parser. There is a finite list of supported operations which could potentially be parsed from Python into C++. Is there a parser somewhere in GeNN to do this?
  • Calculating row length(s). Since the 'd-expression' is a general one, at the least we would have to evaluate it at the position which can reach the most neurons, to get the maximum number of reachable neurons - but how do we take the probability into account? And what about each individual row length?
  • Optimizing for reachable regions of the post population.
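On the max-row-length question: GeNN's FixedProbability snippet sizes rows by taking a high quantile of a binomial distribution over the possible targets, so rows rarely overflow while allocating far less than one slot per postsynaptic neuron. A pure-Python sketch of the same idea (parameter names are mine; for the max-distance connector, `num_reachable_post` would be the number of postsynaptic neurons within max_dist):

```python
import math

# Smallest k with P(X <= k) >= quantile for X ~ Binomial(n, p), computed
# by summing the pmf directly. Padding row storage to a high binomial
# quantile mirrors what GeNN's FixedProbability snippet does for its
# CALC_MAX_ROW_LENGTH function.
def binomial_quantile(n, p, quantile):
    cdf = 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
        if cdf >= quantile:
            return k
    return n

def max_row_length(num_reachable_post, probability, quantile=0.9999):
    return binomial_quantile(num_reachable_post, probability, quantile)
```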

@neworderofjamie
Contributor

So, first of all: awesome! You can and definitely should do this in Python, though - presumably iteration time is pretty painful currently... The best syntax example I can find is here. Thoughts:

  1. It's not enforced in any way, but the code strings should be C rather than C++, so just replace the lambdas with #defines. CUDA and CPU work fine with C++ but OpenCL doesn't (and will be released soon!)
  2. The d-expression thing could be tricky. I feel being in Python will be helpful, as I think you should be able to parse the d-expression using sympy, which I think can also generate code. Then you can build your GeNN model at run-time like the PyNN STDP models / the GIF neuron do.
  3. Because the probability changes per postsynaptic neuron, I think you should probably not use the prevJ += (1 + (int)(log(u) * $(probLogRecip))) algorithm - as a first pass, just loop through all the postsynaptic neurons and compare a uniform draw to the pre-vs-post probability
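Point 3 could be prototyped in plain Python before turning it into a snippet: evaluate the d-expression per (pre, post) pair and keep a synapse when a uniform draw falls below the resulting probability. This is a hedged sketch (the function is my own, 1-D positions for simplicity, and a restricted `eval` stands in for real parsing); a real implementation would compile the expression to C, e.g. via sympy.ccode, rather than using `eval`.

```python
import random
from math import exp, sqrt, sin, cos

# Whitelist of functions the d-expression may call; anything else fails.
_ALLOWED = {"exp": exp, "sqrt": sqrt, "sin": sin, "cos": cos}

# Build one synaptic row: compile the d-expression once, then loop over
# every postsynaptic neuron, compute its distance to the presynaptic
# neuron and keep the synapse when a uniform draw falls below the
# distance-dependent probability.
def build_row(d_expr, pre_pos, post_positions, rng):
    prob = eval("lambda d: " + d_expr, {"__builtins__": {}, **_ALLOWED})
    row = []
    for j, post_pos in enumerate(post_positions):
        d = abs(post_pos - pre_pos)
        if rng.random() < prob(d):
            row.append(j)
    return row
```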

@chanokin
Collaborator Author

chanokin commented Aug 3, 2020

I tried moving the Max Distance & Fixed Probability connector to a Python-only version and I'm getting some really weird behaviour. When I run it in PyCharm with a breakpoint at the end of genn_model.init_connectivity, the code works and seems to produce OK-ish results. But if I run without debugging in PyCharm, or under gdb, it just breaks and throws the following:

Starting program: /home/chanokin/sussex/on_device_rng/venv3/bin/python on_device_dnp_synapse_gen_test.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff565f700 (LWP 225334)]
[New Thread 0x7ffff4e5e700 (LWP 225335)]
[New Thread 0x7ffff265d700 (LWP 225336)]
[New Thread 0x7fffeb87e700 (LWP 225337)]
[New Thread 0x7fffe907d700 (LWP 225338)]
[New Thread 0x7fffe687c700 (LWP 225339)]
[Thread 0x7fffe687c700 (LWP 225339) exited]
[Thread 0x7fffe907d700 (LWP 225338) exited]
[Thread 0x7fffeb87e700 (LWP 225337) exited]
[Thread 0x7ffff265d700 (LWP 225336) exited]
[Thread 0x7ffff4e5e700 (LWP 225335) exited]
[Thread 0x7ffff565f700 (LWP 225334) exited]
[Detaching after fork from child process 225340]
[Detaching after fork from child process 225341]
[New Thread 0x7ffff265d700 (LWP 225363)]
[New Thread 0x7ffff4e5e700 (LWP 225364)]
[New Thread 0x7ffff565f700 (LWP 225365)]
[New Thread 0x7fffe687c700 (LWP 225366)]
[New Thread 0x7fffd96d4700 (LWP 225367)]
[New Thread 0x7fffd8ed3700 (LWP 225368)]

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fffd3e20eb5 in SynapseGroup::SynapseGroup(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, SynapseMatrixType, unsigned int, WeightUpdateModels::Base const*, std::vector<double, std::allocator<double> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, PostsynapticModels::Base const*, std::vector<double, std::allocator<double> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, NeuronGroupInternal*, NeuronGroupInternal*, SynapseGroupInternal const*, InitSparseConnectivitySnippet::Init const&, VarLocation, VarLocation, VarLocation, bool) () from /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/libgenn_dynamic.so
(gdb) c
Continuing.
[gariitomo:225328] *** Process received signal ***
[gariitomo:225328] Signal: Segmentation fault (11)
[gariitomo:225328] Signal code:  (128)
[gariitomo:225328] Failing at address: (nil)
[gariitomo:225328] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x46210)[0x7ffff7df6210]
[gariitomo:225328] [ 1] /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/libgenn_dynamic.so(_ZN12SynapseGroupC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE17SynapseMatrixTypejPKN18WeightUpdateModels4BaseERKSt6vectorIdSaIdEERKSD_IN6Models7VarInitESaISJ_EESN_SN_PKN18PostsynapticModels4BaseESH_SN_P19NeuronGroupInternalST_PK20SynapseGroupInternalRKN29InitSparseConnectivitySnippet4InitE11VarLocationS11_S11_b+0x5d3)[0x7fffd3e20eb5]
[gariitomo:225328] [ 2] /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/_genn_wrapper.cpython-38-x86_64-linux-gnu.so(_ZNSt8_Rb_treeINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_20SynapseGroupInternalESt10_Select1stIS9_ESt4lessIS5_ESaIS9_EE17_M_emplace_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRS7_EESK_IJSL_ODnR17SynapseMatrixTypeRjRPKN18WeightUpdateModels6CustomERKSt6vectorIdSaIdEERKSW_IN6Models7VarInitESaIS12_EES16_S16_RPKN18PostsynapticModels6CustomES10_S16_RP19NeuronGroupInternalS1E_RKN29InitSparseConnectivitySnippet4InitER11VarLocationS1K_S1K_RbEEEEES6_ISt17_Rb_tree_iteratorIS9_EbEDpOT_+0x1ae)[0x7fffe27abbde]
[gariitomo:225328] [ 3] /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/_genn_wrapper.cpython-38-x86_64-linux-gnu.so(_ZN9ModelSpec20addSynapsePopulationIN18WeightUpdateModels6CustomEN18PostsynapticModels6CustomEEEP12SynapseGroupRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE17SynapseMatrixTypejSE_SE_PKT_RKNSG_11ParamValuesERKNSG_9VarValuesERKNSG_12PreVarValuesERKNSG_13PostVarValuesEPKT0_RKNSV_11ParamValuesERKNSV_9VarValuesERKN29InitSparseConnectivitySnippet4InitE+0x1c3)[0x7fffe27ac183]
[gariitomo:225328] [ 4] /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/_genn_wrapper.cpython-38-x86_64-linux-gnu.so(+0x47d34)[0x7fffe2797d34]
[gariitomo:225328] [ 5] /home/chanokin/sussex/on_device_rng/venv3/bin/python(PyCFunction_Call+0xfa)[0x5f188a]
[gariitomo:225328] [ 6] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x62f9)[0x56d299]
[gariitomo:225328] [ 7] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalCodeWithName+0x262)[0x565972]
[gariitomo:225328] [ 8] /home/chanokin/sussex/on_device_rng/venv3/bin/python[0x50729f]
[gariitomo:225328] [ 9] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x6ff)[0x56769f]
[gariitomo:225328] [10] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalCodeWithName+0x262)[0x565972]
[gariitomo:225328] [11] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x3a5)[0x5f1d85]
[gariitomo:225328] [12] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [13] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalCodeWithName+0x262)[0x565972]
[gariitomo:225328] [14] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x3a5)[0x5f1d85]
[gariitomo:225328] [15] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [16] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x1ab)[0x5f1b8b]
[gariitomo:225328] [17] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [18] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x1ab)[0x5f1b8b]
[gariitomo:225328] [19] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [20] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x1ab)[0x5f1b8b]
[gariitomo:225328] [21] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [22] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x1ab)[0x5f1b8b]
[gariitomo:225328] [23] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x827)[0x5677c7]
[gariitomo:225328] [24] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalCodeWithName+0x262)[0x565972]
[gariitomo:225328] [25] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x3a5)[0x5f1d85]
[gariitomo:225328] [26] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x6ff)[0x56769f]
[gariitomo:225328] [27] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalCodeWithName+0x262)[0x565972]
[gariitomo:225328] [28] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyFunction_Vectorcall+0x3a5)[0x5f1d85]
[gariitomo:225328] [29] /home/chanokin/sussex/on_device_rng/venv3/bin/python(_PyEval_EvalFrameDefault+0x54d5)[0x56c475]
[gariitomo:225328] *** End of error message ***

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fffd3e20eb5 in SynapseGroup::SynapseGroup(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, SynapseMatrixType, unsigned int, WeightUpdateModels::Base const*, std::vector<double, std::allocator<double> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, PostsynapticModels::Base const*, std::vector<double, std::allocator<double> > const&, std::vector<Models::VarInit, std::allocator<Models::VarInit> > const&, NeuronGroupInternal*, NeuronGroupInternal*, SynapseGroupInternal const*, InitSparseConnectivitySnippet::Init const&, VarLocation, VarLocation, VarLocation, bool) () from /home/chanokin/sussex/on_device_rng/genn/pygenn/genn_wrapper/libgenn_dynamic.so
(gdb) c
Continuing.
Couldn't get registers: No such process.
Couldn't get registers: No such process.
(gdb) [Thread 0x7fffd8ed3700 (LWP 225368) exited]
[Thread 0x7fffd96d4700 (LWP 225367) exited]
[Thread 0x7fffe687c700 (LWP 225366) exited]
[Thread 0x7ffff565f700 (LWP 225365) exited]
[Thread 0x7ffff4e5e700 (LWP 225364) exited]
[Thread 0x7ffff265d700 (LWP 225363) exited]

Program terminated with signal SIGSEGV, Segmentation fault.

@neworderofjamie
Contributor

Could I see your code? Very odd that attaching a debugger has any effect, as you're (presumably) still using a release build of GeNN

@neworderofjamie
Contributor

neworderofjamie commented Aug 4, 2020

This is totally the same bug we fixed in genn-team/genn#331 for variable initialization - no idea why I didn't make the same fix here as well 😟 Hopefully this branch will fix it. Also, you can't use std::cout in CUDA - just good old printf!

@chanokin
Collaborator Author

chanokin commented Aug 4, 2020

Duuude, you're a genius! Adding s_instance.__disown__() to the connectivity init solved it 😃

About the std::cout, no worries, I just use it for debugging 😅
