
Using uint8 as token type to reduce memory usage #872

Open · wants to merge 1 commit into master
Conversation

@aisk (Contributor) commented Dec 15, 2020

No description provided.

codecov bot commented Dec 15, 2020

Codecov Report

Merging #872 (a15a35c) into master (557bbca) will decrease coverage by 0.01%.
The diff coverage is 50.00%.

@@            Coverage Diff             @@
##           master     #872      +/-   ##
==========================================
- Coverage   80.90%   80.89%   -0.02%     
==========================================
  Files          54       54              
  Lines        6788     6789       +1     
==========================================
  Hits         5492     5492              
- Misses       1069     1070       +1     
  Partials      227      227              
Impacted Files Coverage Δ
compiler/token/token.go 91.66% <0.00%> (-8.34%) ⬇️
compiler/parser/flow_control_parsing.go 82.22% <33.33%> (ø)
compiler/parser/expression_parsing.go 68.52% <100.00%> (ø)
native/ripper/ripper.go 70.37% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 557bbca...a15a35c.

@hachi8833 hachi8833 requested a review from st0012 December 16, 2020 02:05
@st0012 (Member) commented Dec 20, 2020

@aisk thanks for the PR! Do you have any numbers showing how much memory it reduces, though?

Asking because our CI runs a comparison benchmark between the PR branch and the master branch at the end of each build, like this. And this PR's builds don't show an obvious difference as far as I can see:

benchmark                              old allocs     new allocs     delta
BenchmarkBasicMath/add-2               41             41             +0.00%
BenchmarkBasicMath/subtract-2          41             41             +0.00%
BenchmarkBasicMath/multiply-2          41             41             +0.00%
BenchmarkBasicMath/divide-2            41             41             +0.00%
BenchmarkConcurrency/concurrency-2     68552          68500          -0.08%
BenchmarkContextSwitch/fib-2           72309          72309          +0.00%
BenchmarkContextSwitch/quicksort-2     47935          47935          +0.00%

benchmark                              old bytes     new bytes     delta
BenchmarkBasicMath/add-2               1736          1736          +0.00%
BenchmarkBasicMath/subtract-2          1736          1736          +0.00%
BenchmarkBasicMath/multiply-2          1736          1736          +0.00%
BenchmarkBasicMath/divide-2            1736          1736          +0.00%
BenchmarkConcurrency/concurrency-2     3412003       3402804       -0.27%
BenchmarkContextSwitch/fib-2           2997256       2997264       +0.00%
BenchmarkContextSwitch/quicksort-2     2184213       2184209       -0.00%

I'm not sure our default benchmark cases are the right ones for this change, though. So perhaps you can share some statistics as well? 😄

@aisk (Contributor, Author) commented Dec 27, 2020

I did not run any benchmark at first because I thought this optimization's benefit was obvious, but to my surprise, when I tested it I found the memory reduction is minimal. I think maybe Go has some string interning mechanism?

@st0012 (Member) commented Feb 7, 2021

@aisk I think one reason could be that the tokenizer is only called once per program run, so we may need a large codebase to see the effect. While I do appreciate your attempt to optimize Goby's performance, I think such optimization is not necessary at this stage, especially when it could increase the complexity of the code 🙂

(also sorry for the late reply, I've been quite busy working on multiple projects recently 🙏)
