3DDualDomainAttention

3D-DDA: 3D Dual-Domain Attention For Brain Tumor Segmentation

Supported Python versions Framework: PyTorch Framework: MONAI


Authors

Nhu-Tai Do, Hoang-Son Vo-Thanh, Tram-Tran Nguyen-Quynh, and Soo-Hyung Kim

Equal Contribution


[Slide] [Paper] [Project Page] [Present]


Abstract

Grad-CAM visualization of the encoder feature maps along three axes in DynUnet with/without 3D-DDA

Accurate brain tumor segmentation plays an essential role in the diagnosis process. However, it remains challenging due to the variety of tumors in terms of low contrast, morphology, and location, as well as annotation bias and imbalance among tumor regions. This work proposes a novel 3D dual-domain attention module to learn local and global information in the spatial and context domains from encoding feature maps in Unet. Our attention module generates refined feature maps from an enlarged receptive field at every stage through attention mechanisms and residual learning, allowing the network to focus on complex tumor regions. Our experiments on BraTS 2018 demonstrate superior performance compared to existing state-of-the-art methods.


Method

3D Dual-Domain Attention attached to the DynUnet backbone at four stages

More details…

3D-DDA block details.
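
The exact 3D-DDA block is specified in the paper; purely as an illustration of the idea described above (a context-domain branch and a spatial-domain branch refining encoder feature maps with residual learning), a minimal PyTorch sketch might look like the following. The module name, reduction ratio, and branch designs here are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the official 3D-DDA implementation):
# a channel/context branch plus a spatial branch over 3D feature maps,
# fused with a residual connection.
import torch
import torch.nn as nn


class DualDomainAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Context-domain (channel) attention: squeeze spatially, excite channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial-domain attention: compress channels into a 3D voxel-wise mask.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        refined = x * self.channel_gate(x)               # reweight channels (context domain)
        refined = refined * self.spatial_gate(refined)   # reweight voxels (spatial domain)
        return x + refined                               # residual learning


# Example: refine an encoder feature map of shape (B, C, D, H, W).
feat = torch.randn(1, 32, 16, 16, 16)
out = DualDomainAttention3D(32)(feat)
print(out.shape)  # torch.Size([1, 32, 16, 16, 16])
```
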

Paper

[PDF]

Inference

Inference code will be updated soon.
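
In the meantime, a minimal sketch of building the DynUnet backbone with MONAI (the frameworks indicated by the badges above) is shown below; the kernel sizes, strides, input size, and channel configuration are assumptions for a BraTS-style setup, not the paper's exact settings.

```python
# Minimal sketch (assumed configuration, not the authors' exact setup):
# a MONAI DynUNet backbone for BraTS-style 4-modality input and 3 tumor regions.
import torch
from monai.networks.nets import DynUNet

model = DynUNet(
    spatial_dims=3,
    in_channels=4,                 # T1, T1ce, T2, FLAIR
    out_channels=3,                # WT, TC, ET region channels
    kernel_size=[3, 3, 3, 3, 3],
    strides=[1, 2, 2, 2, 2],
    upsample_kernel_size=[2, 2, 2, 2],
)

# The 3D-DDA blocks would be attached to the encoder feature maps at four
# stages, as illustrated in the Method section above.
x = torch.randn(1, 4, 128, 128, 128)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 3, 128, 128, 128])
```
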

Citation

@INPROCEEDINGS{10222602,
  author={Do, Nhu-Tai and Vo-Thanh, Hoang-Son and Nguyen-Quynh, Tram-Tran and Kim, Soo-Hyung},
  booktitle={2023 IEEE International Conference on Image Processing (ICIP)}, 
  title={3D-DDA: 3D Dual-Domain Attention for Brain Tumor Segmentation}, 
  year={2023},
  volume={},
  number={},
  pages={3215-3219},
  doi={10.1109/ICIP49359.2023.10222602}}