scn.load and scn.load_nexus currently use float32 to represent the event weights. These all default to 1.0, so this is fine for now. However, when summing events later, the user may encounter surprising precision loss.
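To illustrate the kind of silent loss a float32 accumulator can hit (a generic NumPy sketch, not scippneutron code):

```python
import numpy as np

# float32 has a 24-bit significand, so integers above 2**24 = 16777216
# are no longer exactly representable and the spacing between adjacent
# representable values grows.
acc = np.float32(2**24)        # 16777216.0, the end of the exact-integer run
acc = acc + np.float32(1.0)    # adding one more unit weight is silently lost
print(acc)                     # still 16777216.0
```

Past this point every further unit-weight event added in float32 is dropped entirely, so a naive float32 sum of event counts plateaus at 2**24.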
We should carefully weigh the risk of precision-related bugs against memory use. Reduction operations use double precision for intermediate values, which avoids many problems, but final results can still be affected significantly. For example:
```python
import numpy as np

np.float32(75893996)  # gives 75894000.0
```
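Concretely, even with float64 intermediates the rounding happens when the result is cast back to float32 (a plain NumPy sketch, not the actual scippneutron API):

```python
import numpy as np

n_events = 75_893_996
# Summing unit weights: float64 intermediates hold the exact total,
# but a float32 result dtype rounds to the nearest representable value.
# At this magnitude (between 2**26 and 2**27) the float32 spacing is 8.
exact = np.float64(n_events)    # 75893996.0, exact
rounded = np.float32(exact)     # 75894000.0, off by 4 events
print(exact, rounded)
```

Keeping the weights (or at least the reduction's output dtype) in float64 would avoid this at the cost of doubling the memory per weight.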