
Unable to set/get prior kappa image from STIR-python (matlab) #662

Open · robbietuk opened this issue Aug 20, 2020 · 4 comments

@robbietuk (Collaborator)
I am performing image reconstruction using STIR-python. After one iteration of OSEM, I would like to use the latest image as the kappa image for the prior. This should work something like this:

recon_obj.get_objective_function().get_prior().set_kappa_sptr(OSEM_image)
recon_obj.get_objective_function().get_prior().get_kappa_sptr()

The question is: can these methods be added to GeneralisedPrior.h and therefore exposed, or is there a prior that does not support the kappa methodology? It seems to work with PLS, RDP, QP, and log-cosh.
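For context, a minimal sketch of the failure mode (the parameter file name is a placeholder, the wrapped class name OSMAPOSLReconstruction3DFloat follows STIR's usual SWIG naming, and OSEM_image is the image from the snippet above):

import stir

# Placeholder parameter file; any OSMAPOSL set-up would do
recon_obj = stir.OSMAPOSLReconstruction3DFloat('OSMAPOSL.par')

prior = recon_obj.get_objective_function().get_prior()
# get_prior() returns the base GeneralisedPrior wrapper, which has no
# kappa methods, so the call below raises AttributeError even when the
# underlying C++ object (e.g. a QuadraticPrior) does support kappa:
prior.set_kappa_sptr(OSEM_image)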

@robbietuk (Collaborator, Author) commented Aug 20, 2020

Apologies, it is possible to create a new prior with the kappa image using:

import stir
# Construct the Relative Difference Prior and attach the kappa image
RDP = stir.RelativeDifferencePrior3DFloat()
RDP.set_penalisation_factor(0.00001)
RDP.set_kappa_sptr(OSEM_image)
RDP.set_up(OSEM_image)
recon_obj.get_objective_function().set_prior_sptr(RDP)

Yet it remains impossible to change gamma and kappa_sptr from recon_obj.get_objective_function().get_prior()
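A minimal sketch of that reset-based workaround, assuming recon_obj and OSEM_image from above (the helper name replace_prior is ours, and set_gamma is assumed to be exposed):

import stir

def replace_prior(recon_obj, kappa_image, penalisation_factor, gamma=None):
    # Since the prior returned by get_prior() cannot be downcast in
    # Python, build a fresh RelativeDifferencePrior and swap it into the
    # objective function whenever gamma or kappa must change.
    prior = stir.RelativeDifferencePrior3DFloat()
    prior.set_penalisation_factor(penalisation_factor)
    if gamma is not None:
        prior.set_gamma(gamma)  # assumes set_gamma is wrapped
    prior.set_kappa_sptr(kappa_image)
    prior.set_up(kappa_image)
    recon_obj.get_objective_function().set_prior_sptr(prior)
    return prior

# e.g. after each OSEM iteration, refresh kappa from the latest image:
# replace_prior(recon_obj, OSEM_image, 0.00001, gamma=2.0)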

@KrisThielemans (Collaborator)

It would certainly be possible to have a voxel-dependent weight for all priors. The original "kappa" is a specific way to do that which only applies to cases where the prior is a sum of pair-wise priors, i.e.

prior = sum_ij kappa_i kappa_j R_ij(lambda_i, lambda_j)

with usually

R_ij(lambda_i, lambda_j) = w_ij R(lambda_i, lambda_j)

where w_ij is usually chosen as a function of the distance between the 2 voxels (so not location dependent).

This wouldn't be applicable/appropriate to all priors, I guess (especially non-local-means type of things). In that case, you'd have something like

prior = sum_i kappa_i^2 R_i(lambda_i; lambda)

(taking the whole image as argument). This is what we assumed in Tsai's paper.
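As a concrete illustration of the pair-wise form, a minimal numpy sketch (not STIR code; the 1D nearest-neighbour weights w_ij = 1 and the quadratic potential R(a, b) = (a - b)^2 / 4 are just for demonstration):

import numpy as np

def kappa_weighted_prior(lam, kappa):
    # prior = sum_ij kappa_i kappa_j w_ij R(lambda_i, lambda_j)
    # over 1D nearest-neighbour pairs, with w_ij = 1 and
    # R(a, b) = (a - b)**2 / 4.
    value = 0.0
    for i in range(len(lam) - 1):
        j = i + 1
        value += kappa[i] * kappa[j] * (lam[i] - lam[j]) ** 2 / 4
    return value

lam = np.array([1.0, 2.0, 4.0])
kappa = np.array([1.0, 0.5, 0.0])  # kappa_i = 0 switches voxel i off
print(kappa_weighted_prior(lam, kappa))  # 0.125: only the (0, 1) pair contributes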

Enabling this would be part of a long-overdue clean-up of the GeneralisedPrior hierarchy.

@KrisThielemans (Collaborator)

> Yet it remains impossible to change gamma and kappa_sptr from recon_obj.get_objective_function().get_prior()

Yes, this is because we currently cannot downcast in Python. We have a similar problem for the objective functions, but added a ToPoisson... function by hand:

STIR/src/swig/stir.i, lines 1601 to 1607 at aae4c92:

%inline %{
template <class T>
stir::PoissonLogLikelihoodWithLinearModelForMeanAndProjData<T> *
ToPoissonLogLikelihoodWithLinearModelForMeanAndProjData(stir::GeneralisedObjectiveFunction<T> *b) {
return dynamic_cast<stir::PoissonLogLikelihoodWithLinearModelForMeanAndProjData<T>*>(b);
}
%}

Downcasting via SWIG should be possible though, see #284.
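For illustration, usage of such a helper from Python would look roughly like this (the instantiated name ToPoissonLogLikelihoodWithLinearModelForMeanAndProjData3DFloat is an assumption about the %template naming, and ToRelativeDifferencePrior3DFloat is hypothetical):

import stir

obj_fn = recon_obj.get_objective_function()
# Downcast the generic objective function to the concrete Poisson class;
# the 3DFloat suffix assumes the usual template instantiation naming.
poisson_obj = stir.ToPoissonLogLikelihoodWithLinearModelForMeanAndProjData3DFloat(obj_fn)

# An analogous (hypothetical) helper for priors would resolve this issue
# without rebuilding the prior:
# rdp = stir.ToRelativeDifferencePrior3DFloat(obj_fn.get_prior())
# rdp.set_kappa_sptr(OSEM_image)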

@robbietuk (Collaborator, Author)

> This wouldn't be applicable/appropriate to all priors, I guess (especially non-local-means type of things). [...]
>
> Enabling this would be part of a long-overdue clean-up of the GeneralisedPrior hierarchy.

To some extent this implies that we should not move the relevant methods into GeneralisedPrior right now, as the kappa implementation may not be applicable to all priors. My temporary fix of resetting the prior seems to work.

I agree, this will need to be thought through further when the clean-up happens.
