Implementation of Dual Registration Network for 3D Volume Alignment #7476
Replies: 3 comments
-
Hi @s-shahpouri, thanks for your interest here.
-
Note: the affine is then converted to a dense displacement field (DDF), which is used to warp the moving image into the fixed image space. You can get the affine (excluding the trivial last row) like this:

```python
ndim = 2  # image dimension (2 or 3)
moving = batch_data["moving_hand"].to(device)
fixed = batch_data["fixed_hand"].to(device)
# the network takes the concatenated (moving, fixed) pair and returns a DDF
ddf = model(torch.cat((moving, fixed), dim=1))
# the affine parameters (without the trivial last row) are stored on the output block
print(f"affine {model.output_block.theta[0].reshape(-1, ndim, ndim + 1)}")
```
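To actually apply the predicted transform, the returned `ddf` can be passed to MONAI's `Warp` block together with the moving image. A minimal sketch, reusing the `moving` and `ddf` tensors from the snippet above:

```python
from monai.networks.blocks import Warp

# resample the moving image with the predicted dense displacement field
warp_layer = Warp(mode="bilinear", padding_mode="border").to(device)
moved = warp_layer(moving, ddf)  # aligned to the fixed image space
```

A typical unsupervised setup then compares `moved` with `fixed` via an image similarity loss such as `monai.losses.GlobalMutualInformationLoss` to train the network.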
-
You may also want to look at this ongoing discussion: #8236
-
Hello MONAI Community,
I am currently working on a project involving the registration of 3D medical images, and I am interested in implementing a dual 3D CNN architecture that performs this task by predicting the xyz translation parameters needed to align two volumetric images (fixed and moving). The network I have in mind takes the two volumes as input and ends with a fully connected layer outputting three parameters (the xyz translation) used to align the moving image to the fixed image (a minimal sketch of this idea follows at the end of this post).
I am wondering whether there is an existing network or module within MONAI that closely matches this architecture, or whether there is guidance on adapting MONAI components to achieve this functionality.
Any advice or pointers towards relevant parts of the MONAI library or similar implementations would be greatly appreciated.
Thank you in advance for your help!
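For concreteness, here is a minimal sketch of the kind of model described above, using MONAI's `Regressor`: the fixed and moving volumes are concatenated along the channel dimension, passed through a 3D convolutional encoder, and a final fully connected layer outputs the three translation parameters. The input shape and the channel/stride settings below are illustrative assumptions, not values taken from this discussion.

```python
import torch
from monai.networks.nets import Regressor

# dual-volume input (moving + fixed stacked on the channel axis) -> 3 outputs (xyz translation)
# in_shape, channels and strides are illustrative assumptions
model = Regressor(
    in_shape=(2, 96, 96, 96),   # (channels, D, H, W)
    out_shape=(3,),             # xyz translation parameters
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2, 2),
)

moving = torch.rand(1, 1, 96, 96, 96)
fixed = torch.rand(1, 1, 96, 96, 96)
translation = model(torch.cat((moving, fixed), dim=1))  # shape: (1, 3)
```

If a full affine transform (rotation/scaling/shear plus translation) is acceptable, `monai.networks.nets.GlobalNet` follows the same concatenated-input pattern and returns a dense displacement field, similar to the code in the reply above.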