1. If you only need single-threaded execution, you can define `forward` with `&mut self` and mutate internal fields directly (e.g. `self.tensors = other_computed_tensor` or `self.tensors = self.tensors.slice_assign(slice, partial_update)`). However, this won't work correctly in multi-threaded or multi-device training, where updates need to be thread-safe. For a general thread-safe approach, Burn provides `RunningState`, which is what `BatchNorm` uses (see the sketch after the doc excerpt below):
```rust
/// A state that can be updated during the forward pass while being thread safe.
///
/// # Note
///
/// The state value is the average of all updates on all threads.
```
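As an illustration, here is a minimal sketch of a module that keeps a thread-safe running statistic the way `BatchNorm` does with `RunningState`. The `MeanTracker` name, the shapes, and the exact constructor calls are assumptions for a recent Burn version, not code from this discussion:

```rust
use burn::module::{Module, RunningState};
use burn::tensor::{backend::Backend, Tensor};

/// Illustrative module that tracks a per-feature running mean,
/// mirroring how BatchNorm tracks its running statistics.
#[derive(Module, Debug)]
pub struct MeanTracker<B: Backend> {
    mean: RunningState<Tensor<B, 1>>,
}

impl<B: Backend> MeanTracker<B> {
    pub fn new(num_features: usize, device: &B::Device) -> Self {
        Self {
            // Start the running mean at zero for each feature.
            mean: RunningState::new(Tensor::zeros([num_features], device)),
        }
    }

    /// `forward` takes `&self`: the interior state is updated through
    /// `RunningState::update`, which aggregates contributions from all
    /// threads instead of requiring `&mut self`.
    pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        // Per-feature mean over the batch dimension: [batch, features] -> [features].
        let batch_mean: Tensor<B, 1> = input.clone().mean_dim(0).squeeze(0);
        // Detach so the running statistic does not participate in autodiff.
        self.mean.update(batch_mean.detach());
        input
    }
}
```

For the single-threaded `&mut self` variant mentioned in point 1, the field would simply be a plain `Tensor` and `forward` would assign to it directly (e.g. `self.mean = batch_mean;`), at the cost of losing thread safety.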
