New backend specialised in Hamming-weight-preserving circuits #1565
base: master
Conversation
@BrunoLiegiBastonLiegi could you help me out with setting the correct engine from the name of the platform? I didn't understand what you did for the Clifford backend.
This was closed by mistake, so I'm reopening it.
@BrunoLiegiBastonLiegi @stavros11 sans a handful of docstrings that @AlejandroSopena will update today, this is ready for review.
Co-authored-by: Renato Mello <[email protected]>
Thanks, I started looking into this, but before going ahead with the review I wanted to discuss the possibility of changing the structure of the backend, as I detail in one of the comments below.
a compressed representation of quantum states that is restricted to the subspace of
states of Hamming weight :math:`k`, the :class:`qibo.backends.hamming_weight.HammingWeightBackend`
A verb is missing before "a compressed" here, right?
self.platform = engine
if engine == "numpy":
    self.engine = construct_backend(self.platform)
elif engine in ["numba", "cupy", "cuquantum"]:
    self.engine = construct_backend("qibojit", platform=self.platform)
elif engine in ["tensorflow", "pytorch"]:  # pragma: no cover
    self.engine = construct_backend("qiboml", platform=self.platform)
Ok, so the internal engine is the complete backend, but do you actually need the whole backend, or is backend.np enough? Namely, doing self.engine = construct_backend("qibojit", platform=self.platform).np, for instance. This was something that we also changed for the Clifford backend during the review stage.
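For concreteness, a minimal sketch of what that suggestion would amount to (the helper name is hypothetical, and it assumes downstream code only needs the array module, not the full backend object):

```python
from qibo.backends import construct_backend


def _select_array_module(engine: str):
    """Hypothetical helper mirroring the branches above, but keeping only
    ``backend.np`` (the array module) instead of the whole backend."""
    if engine == "numpy":
        return construct_backend(engine).np
    if engine in ("numba", "cupy", "cuquantum"):
        return construct_backend("qibojit", platform=engine).np
    if engine in ("tensorflow", "pytorch"):
        return construct_backend("qiboml", platform=engine).np
    raise ValueError(f"Unsupported engine '{engine}'.")
```

(As the next comment explains, the full backend turns out to be needed for cast and the gate matrices, so this is only the starting point of the discussion.)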
Ok, I double-checked: the backend is used mostly for the cast method and for retrieving the gate matrices, which unfortunately cannot be circumvented. What I was wondering is whether we could dynamically inherit from the correct backend and make this something similar to our MetaBackend:
from qibo import gates
from qibo.backends import NumpyBackend


def some_method(self, x):
    return self.np.log(x)


def create_HW_backend(backend):
    hw_backend = type("HWBackend", (backend,), {"some_method": some_method})()
    hw_backend.name = "hamming_weight"
    return hw_backend


b = create_HW_backend(NumpyBackend)
print(b.np)  # this returns numpy
print(b.some_method(2))  # this returns np.log(2)
print(gates.X(0).matrix(b))  # this returns the numpy matrix of X
Ok, I dug a little deeper; here are some more comments. Broadly speaking, I think you might benefit from using functools.cache instead of caching manually (assuming the number of different input combinations is indeed restricted).
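For reference, a minimal sketch of the functools.cache pattern being suggested (illustrative only; the function name and payload are hypothetical stand-ins for the PR's cached bitstring tables):

```python
from functools import cache
from itertools import combinations


@cache
def strings_with_weight(nqubits: int, weight: int) -> tuple:
    """Hypothetical stand-in: all length-``nqubits`` bitstrings of Hamming
    weight ``weight``, computed once per ``(nqubits, weight)`` pair."""
    return tuple(
        "".join("1" if q in ones else "0" for q in range(nqubits))
        for ones in combinations(range(nqubits), weight)
    )


# Repeated calls with the same arguments return the cached object, so the
# manual dictionary bookkeeping (and its membership checks) is not needed.
assert strings_with_weight(4, 2) is strings_with_weight(4, 2)
```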
    return strings

def _get_cached_strings(
If the set of possible input combinations is restricted, you could even consider caching this one with functools.cache.
def _get_lexicographical_order(self, nqubits, weight):
Suggested change:
-    def _get_lexicographical_order(self, nqubits, weight):
+    @cache
+    def _get_lexicographical_order(self, nqubits, weight):
same here
if (
    key_0 not in self._dict_cached_strings_one
    or key_1 not in self._dict_cached_strings_one
):
Ah okay, this is where the caching happens; by using the @cache decorator directly you could probably avoid this if/else.
for i, index_i in enumerate(indexes):
    for j, index_j in enumerate(indexes):
        if index_i % 2 ** (
            nqubits - gate_qubits - ncontrols
        ) == index_j % 2 ** (nqubits - gate_qubits - ncontrols):
            if (
                strings[j][gate_qubits : gate_qubits + ncontrols].count("1")
                == ncontrols
            ):
                matrix[i, j] = gate_matrix[
                    index_i // 2 ** (nqubits - gate_qubits),
                    index_j // 2 ** (nqubits - gate_qubits),
                ]
            else:
                matrix[i, j] = 1 if i == j else 0
Can't you make indexes and strings ndarrays and avoid this quite convoluted loop by using array operations?
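A hedged sketch of what a vectorized version could look like, assuming indexes can be cast to a 1D integer ndarray, strings is a sequence of bitstrings, and matrix starts out as zeros as in the loop above (the wrapping function and its name are illustrative, not the PR's API):

```python
import numpy as np


def build_controlled_block(indexes, strings, gate_matrix, nqubits, gate_qubits, ncontrols):
    """Illustrative vectorization of the double loop above."""
    indexes = np.asarray(indexes)
    low = 2 ** (nqubits - gate_qubits - ncontrols)
    high = 2 ** (nqubits - gate_qubits)

    # Pairs (i, j) whose indices agree modulo ``low`` (the outer condition).
    same_tail = (indexes[:, None] % low) == (indexes[None, :] % low)
    # Columns whose control substring is all ones (the inner condition).
    controls_on = np.array(
        [s[gate_qubits : gate_qubits + ncontrols].count("1") == ncontrols for s in strings]
    )

    # Identity on matching pairs whose controls are not all on, zero elsewhere.
    matrix = np.where(same_tail, np.eye(len(indexes)), 0).astype(gate_matrix.dtype)
    # Gate-matrix entries where both conditions hold.
    rows, cols = np.nonzero(same_tail & controls_on[None, :])
    matrix[rows, cols] = gate_matrix[indexes[rows] // high, indexes[cols] // high]
    return matrix
```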
for i, string_i in enumerate(strings):
    new_string_i = [""] * len(string_i)
    for k in range(nqubits):
        new_string_i[map_[k]] = string_i[k]
    new_string_i = "".join(new_string_i)
    new_index_i = strings.index(new_string_i)

    for j, string_j in enumerate(strings):
        new_string_j = [""] * len(string_j)
        for k in range(nqubits):
            new_string_j[map_[k]] = string_j[k]
        new_string_j = "".join(new_string_j)
        new_index_j = strings.index(new_string_j)

        new_matrix[new_index_i, new_index_j] = matrix[i, j]
Similarly here: if strings is an array, life probably becomes much easier.
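For illustration, a sketch under the same caveat (names hypothetical): the two loops implement a fixed permutation of rows and columns, so the permuted index can be computed once per string and applied to both axes with fancy indexing:

```python
import numpy as np


def permute_matrix(matrix, strings, map_, nqubits):
    """Illustrative rewrite of the double loop above as a row/column permutation."""
    lookup = {s: i for i, s in enumerate(strings)}  # avoids repeated strings.index calls
    perm = np.empty(len(strings), dtype=int)
    for i, s in enumerate(strings):
        permuted = [""] * nqubits
        for k in range(nqubits):
            permuted[map_[k]] = s[k]
        perm[i] = lookup["".join(permuted)]

    # new_matrix[perm[i], perm[j]] = matrix[i, j] for every pair (i, j).
    new_matrix = np.zeros_like(matrix)
    new_matrix[np.ix_(perm, perm)] = matrix
    return new_matrix
```

Besides removing the quadratic inner loop, the dictionary lookup replaces the repeated strings.index calls, which are linear scans.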
Co-authored-by: BrunoLiegiBastonLiegi <[email protected]>
Run on QPU
Checklist: