
Changes for floating point multiplier #22

Open
lord-tarun opened this issue Apr 12, 2024 · 1 comment

Comments

@lord-tarun

Dear authors, thank you for making your work open source. I looked through the earlier issues and could not find these points raised.
I was going through the repository and saw that the results were tested with 8-bit multipliers. However, for my work I require 32-bit floating point multipliers. As I understand it, the two changes below will be needed. Could you please let me know what further changes might be required? Thanks in advance!

  1. Removing the quantization-related functions in FakeApproxConv2D (in tf2/python/keras/layers/fake_approx_convolutional.py) to allow floating point multipliers.
  2. Creating the binary file for the multiplier by changing the range from 256 (2^8) to the floating point range (2^32), as shown below; a further sketch on operand reinterpretation and table size follows the listing.
#include <stdio.h>
#include <stdint.h>

FILE * f = fopen("output.bin", "wb");

// Note: in C, 2^32 is bitwise XOR, not exponentiation; use a 64-bit shift.
// The counters must be 64-bit, since the bound 2^32 overflows unsigned int.
for(uint64_t a = 0; a < (1ULL << 32); a++)
    for(uint64_t b = 0; b < (1ULL << 32); b++) {
      uint64_t val = approximate_mult(a, b); // replace by your own function call
      fwrite(&val, sizeof(uint64_t), 1, f); // a 32x32-bit product needs 64 bits
    }

fclose(f);
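
Two practical notes on the 32-bit variant, as a minimal sketch under my own assumptions (bits_to_float is a hypothetical helper, not part of the repository, and approximate_mult is the placeholder from above): if the multiplier expects actual float values, each 32-bit counter would need to be reinterpreted as an IEEE-754 float, and a quick size check suggests that enumerating all operand pairs as in the 8-bit case yields a table far too large to store.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: reinterpret a 32-bit pattern as an IEEE-754 float,
   for the case where the approximate multiplier takes float operands
   rather than raw bit patterns. */
static float bits_to_float(uint32_t bits) {
    float f;
    memcpy(&f, &bits, sizeof f); /* well-defined, unlike a pointer cast */
    return f;
}

int main(void) {
    /* Size check: a full 32-bit x 32-bit table has 2^64 entries; at
       8 bytes per 64-bit product that is 2^67 bytes (about 128 EiB). */
    double table_bytes = 8.0 * 18446744073709551616.0; /* 8 * 2^64 */
    printf("full table size: %.3e bytes\n", table_bytes);

    /* Example: 0x3F800000 is the bit pattern of 1.0f. */
    printf("0x3F800000 as float: %f\n", bits_to_float(0x3F800000u));
    return 0;
}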
@xiaoxixideluoke

Hello, do you have the full file for this emulator? The link "https://ehw.fit.vutbr.cz/tf-approximate/tf-approximate-gpu.sif" is no longer valid and the file cannot be found. Could you please share it? Thanks a lot.
