
SeLU activation range? #1287


Open
jmitrevs opened this issue May 1, 2025 · 0 comments · May be fixed by #1296

@jmitrevs (Contributor) commented May 1, 2025

Quick summary

The SeLU activation uses `(typename data_T::value_type)1.0507009873554804934193349852946`. This is dangerous because there is no guarantee that the data type's range covers 1.05, or that it has sufficient precision for the value to be meaningful. What if the data range is -0.25 to 0.25 or so? The type of the constant should be decoupled from the data type. One possibility is to just hardcode something like `ap_ufixed<16, 1, AP_RND>`.
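
A minimal sketch of the suggested decoupling, assuming the Xilinx `ap_fixed` headers; the helper name and template signature are illustrative, not the actual hls4ml code:

```cpp
#include "ap_fixed.h"

// Dedicated type for the SELU scale, independent of the user's data type.
// ap_ufixed<16, 1, AP_RND> covers [0, 2) with 15 fractional bits, so
// 1.0507... is representable to within ~3e-5 no matter how narrow data_T is.
typedef ap_ufixed<16, 1, AP_RND> selu_lambda_t;
static const selu_lambda_t selu_lambda = 1.0507009873554804934193349852946;

// Hypothetical helper for the positive branch: the multiply is carried out
// in ap_fixed's full-width intermediate, and only the final result is cast
// to res_T, so the constant itself is never truncated by the data type.
template <class data_T, class res_T>
res_T selu_positive(data_T x) {
    return res_T(selu_lambda * x);
}
```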

jmitrevs added the bug label May 1, 2025
valerioedu added a commit to valerioedu/hls4ml that referenced this issue May 11, 2025
…arning#1287)

* Replaced literal cast 1.050700987… → `res_T` with a
  `static const ap_fixed<16,6> lambda`, preserving range and
  ~1.5 × 10⁻² LSB precision regardless of user-chosen `res_T`.
* Removed redundant datareg scope; now a single per-element branch (sketched below):
      if (x ≥ 0)  y = λ · x
      else        y = selu_table[idx]   (with index clamped to [0, N-1]).
* Guard against negative-index underflow; behaviour for <0 inputs unchanged.
* Keeps identical latency/area on Vivado 2023.1 & Vitis 2024.1, but
  eliminates silent overflow/rounding when `res_T` is narrow (e.g., ap_fixed<8,2>).

Fixes fastmachinelearning#1287
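
A hedged sketch of the branch described in the commit message above, assuming a precomputed `selu_table` of size `N` for negative inputs; the index mapping is an illustrative placeholder, not the commit's actual code:

```cpp
#include "ap_fixed.h"

// The scale factor in its own type, decoupled from res_T as described:
static const ap_fixed<16, 6> selu_lambda = 1.0507009873554804934193349852946;

template <class data_T, class res_T, int N>
res_T selu_elem(data_T x, const res_T selu_table[N]) {
    if (x >= 0) {
        // Positive branch: y = lambda * x.
        return res_T(selu_lambda * x);
    }
    // Negative branch: table lookup. How x maps onto the table domain is
    // assumed here; the essential part is the clamp to [0, N-1], which
    // guards against negative-index underflow for very negative inputs.
    int idx = (int)(-x * N); // illustrative index calculation
    if (idx < 0)
        idx = 0;
    if (idx > N - 1)
        idx = N - 1;
    return selu_table[idx];
}
```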
valerioedu added a commit to valerioedu/hls4ml that referenced this issue May 12, 2025