
[Proposal] only positive reward #1076

Closed
fan-ziqi opened this issue Sep 30, 2024 · 2 comments · May be fixed by #1813

Comments

@fan-ziqi
Contributor

Proposal

Allow users to optionally keep only positive rewards (i.e. clip a negative total reward to zero) in the reward manager.

@mpgussert
Collaborator

Users have full control over the reward function when defining their own environment. It's not clear to me what you are asking for here. Do you want a toggle to invert the rewards in the standalone samples?

@fan-ziqi
Contributor Author

fan-ziqi commented Oct 20, 2024

Thank you for the reply, @mpgussert!

What I mean is that the compute_reward function in legged_gym uses an only_positive_rewards flag to control whether a negative total reward is clipped to zero. Do you plan to introduce this feature in the reward_manager as well?

    def compute_reward(self):
        """ Compute rewards
            Calls each reward function which had a non-zero scale (processed in self._prepare_reward_function())
            adds each term to the episode sums and to the total reward
        """
        self.rew_buf[:] = 0.
        for i in range(len(self.reward_functions)):
            name = self.reward_names[i]
            rew = self.reward_functions[i]() * self.reward_scales[name]
            self.rew_buf += rew
            self.episode_sums[name] += rew
        if self.cfg.rewards.only_positive_rewards:
            self.rew_buf[:] = torch.clip(self.rew_buf[:], min=0.)
        # add termination reward after clipping
        if "termination" in self.reward_scales:
            rew = self._reward_termination() * self.reward_scales["termination"]
            self.rew_buf += rew
            self.episode_sums["termination"] += rew

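For illustration, here is a minimal sketch of what such a toggle could look like when applied to a summed per-step reward buffer. The helper name clip_total_reward and the only_positive_rewards parameter are assumptions modeled on the legged_gym snippet above, not an existing reward_manager API:

    import torch

    def clip_total_reward(rew_buf: torch.Tensor, only_positive_rewards: bool) -> torch.Tensor:
        """Optionally clip the summed per-step reward so it never goes below zero.

        `only_positive_rewards` mirrors the legged_gym config flag; this helper is a
        hypothetical illustration, not part of the Isaac Lab reward manager.
        """
        if only_positive_rewards:
            # Clip the total (already-summed) reward, not the individual terms,
            # matching the behaviour of legged_gym's compute_reward.
            rew_buf = torch.clip(rew_buf, min=0.0)
        return rew_buf

    # Example usage after all reward terms have been accumulated:
    # rew_buf = clip_total_reward(rew_buf, only_positive_rewards=True)

As in legged_gym, any termination reward would be added after this clipping step so that it is not removed by the clip.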