Continuing the discussion from #434 (comment). We may want to allow multiple implementations of the same function: for example, making it possible to use opt-einsum to cache contraction paths without that being the default behaviour.
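For reference, this is the kind of caching opt-einsum already exposes through `contract_expression`, which pre-computes a contraction path once and reuses it across calls (the equation and shapes below are arbitrary placeholders):

```python
import numpy as np
import opt_einsum as oe

# Pre-compute the contraction path once for fixed shapes; the returned
# expression caches the path and can be applied to many concrete arrays.
expr = oe.contract_expression("ij,jk,kl->il", (10, 20), (20, 30), (30, 5))

a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)
result = expr(a, b, c)  # reuses the cached path on every call
```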
Here is an example of a possible API (without an accompanying implementation).
```python
from math import sqrt

@allow_alternative_implementations
def absolute_value(x):
    return abs(x)

@absolute_value.alternative_implementation(name="sqrt_based")
def _sqrt_based_abs(x):
    return sqrt(x * x)

absolute_value(x)  # use original implementation
absolute_value.use_alternative("sqrt_based")
absolute_value(x)  # use sqrt_based
absolute_value.use_alternative(None)
absolute_value(x)  # use original implementation again
```
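Since the issue leaves the implementation open, here is a minimal sketch of how the decorator itself could work. The names follow the example above; everything else is hypothetical, not an existing API:

```python
import functools

def allow_alternative_implementations(default):
    """Hypothetical decorator: lets named alternative implementations
    be registered and selected at runtime (sketch only)."""
    implementations = {None: default}  # None selects the original
    state = {"selected": None}

    @functools.wraps(default)
    def wrapper(*args, **kwargs):
        # Dispatch to whichever implementation is currently selected.
        return implementations[state["selected"]](*args, **kwargs)

    def alternative_implementation(name):
        def register(func):
            implementations[name] = func
            return func
        return register

    def use_alternative(name):
        if name not in implementations:
            raise ValueError(f"unknown implementation: {name!r}")
        previous, state["selected"] = state["selected"], name
        return previous  # handy for restoring the prior selection

    wrapper.alternative_implementation = alternative_implementation
    wrapper.use_alternative = use_alternative
    return wrapper
```

One caveat of this sketch is that the selection is global mutable state, so switching implementations affects all callers until it is switched back.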
Maybe we could call it @dynamically_dispatched and then have options controlling the dispatch. We could even use a unified system for backend dispatching, tensor-algebra backend dispatching, and this.
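If we go the unified route, a context manager might be a natural way to scope the dispatch choice instead of toggling global state. A hedged sketch, building on the implementation above (the `dispatch` helper is made up here):

```python
import contextlib

@contextlib.contextmanager
def dispatch(func, name):
    """Temporarily select a named implementation, restoring the previous
    one on exit (relies on use_alternative returning the prior selection,
    as in the sketch above)."""
    previous = func.use_alternative(name)
    try:
        yield func
    finally:
        func.use_alternative(previous)

# Usage with the earlier example:
# with dispatch(absolute_value, "sqrt_based"):
#     absolute_value(x)  # uses sqrt_based inside the block
# absolute_value(x)      # back to the original implementation
```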