the gradient of the action-value with respect to actions #7

Open
Joywanglulu opened this issue Dec 4, 2018 · 1 comment

@Joywanglulu

Hi, I'm not sure whether this would calculate the gradient of the action-value with respect to the actions:

policy_loss = -self.critic([
to_tensor(state_batch),
self.actor(to_tensor(state_batch))
])

@zhihanyang2022

I think the answer is yes.

policy_loss = -self.critic([
    to_tensor(state_batch),
    self.actor(to_tensor(state_batch))
])

policy_loss = policy_loss.mean()
policy_loss.backward()
self.actor_optim.step()

First of all, I think it is clear that we are doing a gradient step using the actor's optimizer. I guess the real question is: "can we propagate gradients back through one network into another?" The answer to this is also yes: backward() applies the chain rule through the critic, so the gradient of the policy loss with respect to the actor's parameters is dL/dtheta_actor = -(dQ/da) * (da/dtheta_actor), which contains exactly the gradient of the action-value with respect to the actions. See https://discuss.pytorch.org/t/backprop-through-weights-of-a-second-network/52573/4, and the sketch below.
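To make this concrete, here is a minimal, self-contained sketch (not from this repo; the one-layer actor and critic are toy stand-ins I made up) showing that autograd exposes dQ/da directly, and that backward() pushes gradients through the critic into the actor's parameters:

import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
actor = nn.Linear(state_dim, action_dim)       # toy stand-in for the actor
critic = nn.Linear(state_dim + action_dim, 1)  # toy stand-in for the critic
actor_optim = torch.optim.Adam(actor.parameters(), lr=1e-3)

state_batch = torch.randn(8, state_dim)

# Forward pass: a = actor(s), q = critic(s, a)
action = actor(state_batch)
q = critic(torch.cat([state_batch, action], dim=1))

# dQ/da is a node in the graph, so autograd can return it explicitly
dq_da = torch.autograd.grad(q.sum(), action, retain_graph=True)[0]
print(dq_da.shape)  # torch.Size([8, 2]): one gradient per action in the batch

# The actor update: backward() chains dQ/da with da/dtheta_actor
actor_optim.zero_grad()
policy_loss = -q.mean()
policy_loss.backward()
print(actor.weight.grad is not None)  # True: gradients reached the actor

# backward() also fills the critic's .grad buffers, but only
# actor_optim.step() is called, so the critic's weights stay fixed
actor_optim.step()

Note that because backward() accumulates gradients into the critic as well, the critic's gradients need to be zeroed before its own update step; only the actor optimizer steps here, so the critic's weights are unaffected.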
