Training speed is very slow!!! #62

Open
xuzhou666 opened this issue Jan 13, 2024 · 1 comment

Comments

@xuzhou666

The README claims "Every algorithm can be trained within 30 seconds, even without GPU" — that's false.
[screenshot: training log with two marked locations]
The two places marked in the screenshot stalled for a long time, and DQN training had not finished after more than an hour.

@mokizzz

mokizzz commented Feb 21, 2024

It seems that the long time spent at lines 93-102 of dqn.py comes from the gameplay loop running with the well-trained Qnet.

minimalRL/dqn.py

Lines 93 to 102 in c8efed8

while not done:
    a = q.sample_action(torch.from_numpy(s).float(), epsilon)
    s_prime, r, done, truncated, info = env.step(a)
    done_mask = 0.0 if done else 1.0
    memory.put((s, a, r / 100.0, s_prime, done_mask))
    s = s_prime
    score += r
    if done:
        break

This longer duration is not a problem with the training process itself, but rather a result of the Qnet's ability to successfully play the game for extended periods without failing.
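To illustrate the effect: as the Qnet improves, each episode lasts more steps before `done` is set, so wall-clock time per episode grows even though training itself is fast. Below is a minimal, self-contained sketch of this (the `run_episode` helper and `policy_steps` parameter are hypothetical, not from dqn.py). It mimics how a step cap, like gymnasium's `TimeLimit` truncation, would bound episode length for a well-trained agent:

```python
def run_episode(policy_steps, max_steps=500):
    """Simulate an episode that would naturally last `policy_steps` steps,
    truncated at `max_steps` (mirrors a TimeLimit-style cap)."""
    steps = 0
    done = False
    truncated = False
    while not (done or truncated):
        steps += 1
        done = steps >= policy_steps      # the agent eventually fails
        truncated = steps >= max_steps    # the env cuts the episode off
    return steps, truncated

# An untrained agent fails quickly; a well-trained one hits the cap.
print(run_episode(20))      # (20, False)
print(run_episode(10_000))  # (500, True)
```

So if episodes feel stuck, checking whether `truncated` is ever True (and breaking on it alongside `done`) would confirm the slowdown is long successful episodes rather than a hang.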
