Runtime warning when using ConjugateGradientOptimizer #72

Open
lchenat opened this issue Feb 6, 2017 · 1 comment
Comments

lchenat (Contributor) commented Feb 6, 2017

I am trying to use ConjugateGradientOptimizer in my own algorithm, but I often get this warning:

RuntimeWarning: invalid value encountered in sqrt
2.0 * self._max_constraint_val * (1. / (descent_direction.dot(Hx(descent_direction)) + 1e-8))

After this warning appears, all of the training statistics become nan. Is there a good way to resolve this?

dementrock (Member) commented
Check whether the value inside the square root is negative (it shouldn't be); if it is, that is likely due to numerical issues.
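
One way to act on that suggestion is to clamp the quadratic form before taking the square root. The sketch below is a minimal illustration assuming NumPy; `safe_initial_step_size` is a hypothetical helper, not part of rllab, but it mirrors the expression from the warning above:

```python
import numpy as np

def safe_initial_step_size(descent_direction, Hx, max_constraint_val, eps=1e-8):
    """Hypothetical helper mirroring the expression from the warning above.

    The quadratic form d^T H d should be non-negative when H (the Fisher
    information / Hessian-vector product) is positive semi-definite, but
    numerical error can make it slightly negative, which turns the sqrt
    argument into NaN.
    """
    quad = descent_direction.dot(Hx(descent_direction))
    if quad < eps:
        # Clamp away from zero/negative values so the sqrt stays well-defined.
        quad = eps
    return np.sqrt(2.0 * max_constraint_val / quad)
```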

jonashen pushed a commit to jonashen/rllab that referenced this issue May 29, 2018
Add variable scope to symbolic operations using TensorFlow

A variable scope facilitates the reading of a TensorFlow graph by
grouping tensor objects in hierarchies.

In rllab, the hierarchies are defined with primitive objects and
symbolic operations, where the latter are member functions of a
primitive.

In this change, the primitives are the algorithms, networks,
distributions, optimizers, policies, Q-functions and regressors.

An example of a primitive is the DiagonalGaussian class, which
implements a probability distribution, and an example of a symbolic
function is kl_sym, which implements the Kullback–Leibler divergence.
The idea of introducing the variable scope is that all the tensor
operations in kl_sym are encapsulated within the hierarchy
DiagonalGaussian/kl_sym.
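
As a rough illustration of that hierarchy (a sketch using the TensorFlow 1.x-style graph API; the actual rllab signatures may differ and the KL body is abbreviated), the ops created inside kl_sym end up under a DiagonalGaussian/kl_sym/... prefix:

```python
import tensorflow as tf  # TensorFlow 1.x-style graph API

class DiagonalGaussian:
    def __init__(self, name="DiagonalGaussian"):
        self.name = name

    def kl_sym(self, old_means, old_log_stds, new_means, new_log_stds, name="kl_sym"):
        # Nesting the primitive scope and the symbolic-function scope groups
        # every op below under "DiagonalGaussian/kl_sym/..." in the graph.
        with tf.variable_scope(self.name):
            with tf.variable_scope(name):
                old_stds = tf.exp(old_log_stds)
                new_stds = tf.exp(new_log_stds)
                numerator = (tf.square(old_means - new_means)
                             + tf.square(old_stds) - tf.square(new_stds))
                denominator = 2.0 * tf.square(new_stds) + 1e-8
                return tf.reduce_sum(
                    numerator / denominator + new_log_stds - old_log_stds, axis=-1)
```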

Each primitive and symbolic function has a "name" parameter, whose
default value is the name of the primitive or symbolic function, but
developers can set these parameters to whatever they find more
convenient, so the previous example could instead read
distribution/divergence.

A context class was added to tensor_utils.py that checks whether the
variable scope of the corresponding primitive is already set, sets it
if it is not, and removes it once the symbolic function exits.
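
A minimal sketch of such a context class (hypothetical names; the actual implementation in tensor_utils.py may differ) could look like:

```python
import tensorflow as tf  # TensorFlow 1.x-style graph API

class enclosing_scope:
    """Enter `scope_name` only if it is not already the innermost variable
    scope; otherwise keep the current scope (hypothetical helper)."""

    def __init__(self, scope_name):
        self._scope_name = scope_name
        self._cm = None

    def __enter__(self):
        current = tf.get_variable_scope().name
        if current.split("/")[-1] != self._scope_name:
            # The primitive's scope is not set yet, so open it here ...
            self._cm = tf.variable_scope(self._scope_name)
            return self._cm.__enter__()
        # ... otherwise reuse the scope that is already in place.
        return tf.get_variable_scope()

    def __exit__(self, *exc):
        # Only close the scope if this context manager opened it.
        if self._cm is not None:
            return self._cm.__exit__(*exc)
        return False
```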

The only caveat with variable scopes is that they work based on the call
stack rather than on a class or file scope, so even if the same scope
name is used in two different call stacks, TensorFlow appends an index
to the primitive or symbolic function name to make each scope unique.
Therefore, not all symbolic operations performed in a primitive will
appear under the same primitive scope; they will instead be split across
different instances of that scope (e.g. DiagonalGaussian and
DiagonalGaussian_1).
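
For example (a small sketch, again using the TF 1.x-style API), entering the same scope name from two different call sites typically produces uniquified op prefixes:

```python
import tensorflow as tf  # TensorFlow 1.x-style graph API

with tf.variable_scope("DiagonalGaussian"):
    kl_a = tf.constant(0.0, name="kl")   # op name: "DiagonalGaussian/kl"

with tf.variable_scope("DiagonalGaussian"):
    # The variable scope name is reused, but the op name scope is uniquified,
    # so ops created here typically appear as "DiagonalGaussian_1/kl".
    kl_b = tf.constant(0.0, name="kl")

print(kl_a.op.name, kl_b.op.name)
```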