Add support for arbitrary linear combination gradient recipes #909
Apart from this block here, it's a shame we can't re-use this method for the tape-mode gradient 🙁

Do you think it makes sense to do the following? This way:

- We can call this method inside both the old QNode gradient methods and the new tape gradient methods.
- We can pass a different default shift value for the qubit parameter-shift rule.
- The branch that worries about variables is only called in non-tape mode.

Note that checking the object itself is safer than using `qml.tape_active()`, which only checks if the user has activated tape mode, not if the object itself is a tape.
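For example, a minimal sketch of that dispatch (the helper name and the hard-coded two-term recipe are illustrative, not this PR's actual code):

```python
import numpy as np
from pennylane.tape import QuantumTape

def get_parameter_shift(obj, shift=np.pi / 2):
    """Hypothetical helper: dispatch on whether ``obj`` is itself a tape."""
    if isinstance(obj, QuantumTape):
        # The object itself is a tape, so no Variable bookkeeping is needed;
        # ``shift`` doubles as the default for the qubit parameter-shift rule.
        return [[0.5 / np.sin(shift), 1.0, shift],
                [-0.5 / np.sin(shift), 1.0, -shift]]
    # Non-tape mode: the branch that resolves Variables would run here.
    raise NotImplementedError("Variable-resolving branch elided in this sketch")
```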
Hi Josh, thanks so much (forgot to reply here 😅). Worked great, adjusted the method!
Think I need to understand what `r_` is doing here 🤔
`np.r_` is a really nice way of doing efficient row-wise concatenation using slices :)
It's essentially a slightly more efficient way of doing `np.concatenate([args, np.array([a * args[idx] + shift])])`, avoiding the need to create intermediate arrays.
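A quick illustration (values invented here):

```python
import numpy as np

args = np.array([0.1, 0.2, 0.3])
a, idx, shift = 1.0, 2, 0.5

# np.r_ concatenates its arguments row-wise and accepts scalars directly,
# so no intermediate one-element array is built.
shifted = np.r_[args, a * args[idx] + shift]
print(shifted)  # [0.1 0.2 0.3 0.8]

# Equivalent, but with an extra temporary array:
shifted2 = np.concatenate([args, np.array([a * args[idx] + shift])])
print(np.allclose(shifted, shifted2))  # True
```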
Oh nice! My follow-up thought was why this is a use case for doing that 🤔 E.g. with `params = [1, 2, 3]`, `idx = 1`, `shift = 0.5`, this means that `shift_p = [1, 2, 3, 2.5]`? I guess I was expecting `[1, 2.5, 3]`.
Yeah, the gradient rules in non-tape mode are written a bit strangely 😆. Before shifting a parameter, a new 'temp' parameter is added to the circuit. It is this temp parameter that is shifted; it is then deleted once the gradient has been computed.
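Roughly, the pattern is something like this (a simplified sketch with a stand-in `evaluate`, not the actual QNode internals):

```python
import numpy as np

def evaluate(params):
    # Stand-in for the circuit evaluation; during the shift, the real QNode
    # reads the appended 'temp' slot in place of params[idx].
    return np.sin(params[-1])

def shifted_term(params, idx, a=1.0, shift=np.pi / 2):
    # Append a temporary shifted copy of params[idx] to the parameter list...
    temp_params = np.r_[params, a * params[idx] + shift]
    value = evaluate(temp_params)
    # ...the temp parameter is discarded once this term has been computed.
    return value

print(shifted_term(np.array([0.1, 0.2, 0.3]), idx=1))  # sin(0.2 + pi/2) ≈ 0.980
```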
The tape mode implementation is a lot 'cleaner' imho
See the corresponding logic in `QubitQNode._pd_analytic`.
Here I mainly kept the implementation as it was before.
The workaround proposed by numpy/numpy#7225 (comment).
🤔 I'm guessing this is because `c` has a dtype that is causing casting issues? Would it be better to simply cast/ensure that `c` and `shifted` are the correct dtype?
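Something like this, perhaps (array contents invented for illustration):

```python
import numpy as np

c = np.array([1, -1])            # e.g. an integer-dtype coefficient array
shifted = np.array([0.3, -0.7])  # float results of the shifted evaluations

# Casting both to a common float dtype up front avoids implicit same-kind
# casting errors in later in-place operations such as c *= ...
c = np.asarray(c, dtype=np.float64)
shifted = np.asarray(shifted, dtype=np.float64)
print(c * shifted)  # [0.3 0.7]
```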
(same here)