The process cannot end automatically and will print logs indefinitely #285

Closed
Supergirlzjl opened this issue Dec 28, 2024 · 7 comments
@Supergirlzjl

We are using the latest version of memtier_benchmark (2.1.2). When the --rate-limiting parameter is used, the test occasionally fails to end normally: the log is printed indefinitely, the process never terminates, and the final statistics are never output. Can you help me take a look at this problem?
The command is: nohup memtier_benchmark -s 10.71..80 -p 6379 -a 1234 --cluster-mode --print-percentiles 50,90,95,99,100 --random-data --randomize --distinct-client-seed --hide-histogram --key-minimum 1 --key-maximum 25000000 --key-prefix="type_set_preset_" --command="sadd __key__ __data__" --command-ratio=1 --command-key-pattern=P -n 500000 -c 1 -t 50 -d 1024 --rate-limiting=200 > /root/ /logs/sadd-1.log 2>&1 &

[RUN #1 100%, 2537 secs] 50 threads:    24970323 ops,   10016 (avg:    9842) ops/sec, 10.41MB/sec (avg: 10.23MB/sec),  0.77 (avg:  1.73) msec latency
[RUN #1 100%, 2537 secs] 50 threads:    24972723 ops,   10016 (avg:    9842) ops/sec, 10.41MB/sec (avg: 10.23MB/sec),  0.60 (avg:  1.73) msec latency
[RUN #1 100%, 2537 secs] 50 threads:    24975123 ops,    9994 (avg:    9842) ops/sec, 10.39MB/sec (avg: 10.23MB/sec),  0.58 (avg:  1.73) msec latency
[RUN #1 100%, 2537 secs] 50 threads:    24977523 ops,    9994 (avg:    9842) ops/sec, 10.39MB/sec (avg: 10.23MB/sec),  0.66 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24979923 ops,    9984 (avg:    9842) ops/sec, 10.37MB/sec (avg: 10.23MB/sec),  0.62 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24982323 ops,   10005 (avg:    9842) ops/sec, 10.40MB/sec (avg: 10.23MB/sec),  0.58 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24984499 ops,   10059 (avg:    9842) ops/sec, 10.45MB/sec (avg: 10.23MB/sec),  0.61 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24986267 ops,    9994 (avg:    9842) ops/sec, 10.39MB/sec (avg: 10.23MB/sec),  0.69 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24987670 ops,    9982 (avg:    9842) ops/sec, 10.37MB/sec (avg: 10.23MB/sec),  0.65 (avg:  1.73) msec latency
[RUN #1 100%, 2538 secs] 50 threads:    24989070 ops,    9997 (avg:    9842) ops/sec, 10.39MB/sec (avg: 10.23MB/sec),  0.54 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24990470 ops,   10034 (avg:    9842) ops/sec, 10.43MB/sec (avg: 10.23MB/sec),  0.54 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24991870 ops,    9997 (avg:    9842) ops/sec, 10.39MB/sec (avg: 10.23MB/sec),  0.45 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24993270 ops,   10016 (avg:    9842) ops/sec, 10.41MB/sec (avg: 10.23MB/sec),  0.46 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24994624 ops,    9669 (avg:    9842) ops/sec, 10.05MB/sec (avg: 10.23MB/sec),  0.82 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24996024 ops,    9979 (avg:    9842) ops/sec, 10.37MB/sec (avg: 10.23MB/sec),  0.58 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24997424 ops,    9979 (avg:    9842) ops/sec, 10.37MB/sec (avg: 10.23MB/sec),  0.73 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24998770 ops,   10130 (avg:    9842) ops/sec, 10.53MB/sec (avg: 10.23MB/sec),  0.58 (avg:  1.73) msec latency
[RUN #1 100%, 2539 secs] 50 threads:    24999678 ops,    9963 (avg:    9842) ops/sec, 10.36MB/sec (avg: 10.23MB/sec),  0.51 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
[RUN #1 100%, 2540 secs] 50 threads:    25000000 ops,   10143 (avg:    9842) ops/sec, 10.54MB/sec (avg: 10.23MB/sec),  0.28 (avg:  1.73) msec latency
@Supergirlzjl
Author

This issue is similar to #284, but here the final results are never reported and the process cannot be terminated normally.

@Supergirlzjl
Author

nohup memtier_benchmark -s 10.71.**.80 -p 6379 -a 1234** --cluster-mode --print-percentiles 50,90,95,99,100 --random-data --randomize --distinct-client-seed --hide-histogram --key-minimum 1 --key-maximum 25000000 --key-prefix="type_set_preset_" --command="sadd __key__ __data__" --command-ratio=1 --command-key-pattern=P -n 500000 -c 1 -t 50 -d 1024 --rate-limiting=200  > /root/ /logs/sadd-1.log 2>&1 &

@Supergirlzjl Supergirlzjl changed the title When using the latest version 2.1.2 of memtier_benchmark and using the --rate-limiting parameter, it is found that the test occasionally fails to end normally. The log will be printed infinitely, and the process cannot be terminated normally and the data statistics cannot be output. The process cannot end automatically and will print logs indefinitely Dec 30, 2024
@YaacovHazan
Collaborator

Hi, @Supergirlzjl. Does the test not stop and continue to print the same line?

@Supergirlzjl
Author

> Hi, @Supergirlzjl. Does the test not stop and continue to print the same line?

Yes.

@YaacovHazan
Collaborator

@Supergirlzjl thanks, I think I found something. Do you remember if, in this case, the IP / Port you provided is for one of the slaves in the cluster?

@Supergirlzjl
Author

> @Supergirlzjl thanks, I think I found something. Do you remember if, in this case, the IP / Port you provided is for one of the slaves in the cluster?

Yes, I checked and it is indeed connected to the slave node.
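
For anyone hitting the same hang, a quick way to confirm whether the address passed to -s is a replica is to ask the node for its replication role (host, port and password below are placeholders, assuming redis-cli is available):

# "role:slave" in the output means the target node is a replica.
redis-cli -h <host> -p <port> -a <password> info replication | grep '^role:'

# In cluster mode, list every node together with its master/slave flag.
redis-cli -h <host> -p <port> -a <password> cluster nodes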

YaacovHazan added a commit to YaacovHazan/memtier_benchmark that referenced this issue Jan 5, 2025
When a connection is disconnected, the timer event is not freed, which causes the test to keep running forever.

One of these cases is starting the benchmark in cluster mode and using the replica's ip/port.
YaacovHazan added a commit that referenced this issue Jan 5, 2025
When a connection is disconnected, the timer event is not freed, which causes the test to keep running forever.

One of these cases is starting the benchmark in cluster mode and using the replica's ip/port.
@YaacovHazan
Collaborator

Fixed in #286
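
Until a release that includes the fix is published, one way to pick it up is to build memtier_benchmark from the repository (the steps below follow the project's README build instructions; package names for the libevent, pcre, zlib and openssl dependencies vary by distribution):

# Build memtier_benchmark from source to get the latest fixes.
git clone https://github.com/RedisLabs/memtier_benchmark.git
cd memtier_benchmark
autoreconf -ivf
./configure
make
sudo make install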
