
[Multi-Modal Image+Text] Process the volumetric images from UPMC dataset and try a simple method #37

Open
3 tasks
sumedhasingla opened this issue Jul 16, 2018 · 6 comments

sumedhasingla commented Jul 16, 2018

  • Convert the 3D volumetric images to 2D X-rays

  • Run a simple model for feature extraction and classification.

  • Extract classification labels from the UPMC reports by passing them through an NLP pipeline and extracting 14 disease labels

sumedhasingla commented Oct 14, 2018

In the first attempt, we converted the 3D image to 2D by taking an average over the x, y, and z directions. The quality of the image obtained this way is not good; plain averaging is not a faithful 3D-to-2D conversion.

Example: [image attachment]

  1. Average over the middle 20% of the slices.
  2. Average over all the slices.
  3. A single middle slice.
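A minimal NumPy sketch of the three reductions above (the function and array names are illustrative, not from the project code):

```python
import numpy as np

def project(volume, mode="all", axis=0):
    """Collapse a 3D volume to 2D along `axis` using one of three strategies:
    "middle20" - average over the middle 20% of the slices,
    "all"      - average over all the slices,
    "middle"   - take a single middle slice."""
    n = volume.shape[axis]
    if mode == "middle20":
        lo = int(n * 0.4)
        hi = max(lo + 1, int(n * 0.6))  # middle 20% of slice indices
        return np.take(volume, range(lo, hi), axis=axis).mean(axis=axis)
    if mode == "all":
        return volume.mean(axis=axis)
    if mode == "middle":
        return np.take(volume, n // 2, axis=axis)
    raise ValueError(f"unknown mode: {mode!r}")

# Synthetic stand-in for a CT volume, shaped (slices, height, width).
vol = np.random.rand(100, 256, 256)
for m in ("middle20", "all", "middle"):
    print(m, project(vol, m).shape)
```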


kayhan-batmanghelich commented Oct 14, 2018 via email

sumedhasingla commented Oct 14, 2018

No, this is just averaging. I am posting new comments with those results.

sumedhasingla commented Oct 14, 2018

In the second attempt, we tried sampling a few 2D slices from the 3D CT scan and using multiple slices together. With this approach there is a higher chance of missing the regions that contain findings, so it is not a good approach either.
Some samples are placed at location: /pghbio/dbmi/batmanlab/singla/Image_Text_Project/Data_Image_Slices_ChestXRay_Terms/
Example: p113577_a117962
[image attachment]
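The slice-sampling step could look like the following NumPy sketch (a hypothetical helper, not the project code), which picks evenly spaced slices along the axial direction:

```python
import numpy as np

def sample_slices(volume, k=5, axis=0):
    """Pick k evenly spaced 2D slices from a 3D volume along `axis`."""
    n = volume.shape[axis]
    idx = np.linspace(0, n - 1, k).round().astype(int)
    return np.take(volume, idx, axis=axis)

# Synthetic stand-in for a CT volume, shaped (slices, height, width).
vol = np.random.rand(100, 256, 256)
print(sample_slices(vol, k=5).shape)  # 5 slices, each 256x256
```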

@sumedhasingla
Copy link
Contributor Author

sumedhasingla commented Oct 14, 2018

The subjects in the UPMC dataset are not always in the same orientation.
In the third attempt, we used the ITK module ITKTwoProjectionRegistration to get a 2D projection from the 3D CT scan. More details about the method can be found at: http://www.insight-journal.org/browse/publication/784
GitHub link: https://github.com/InsightSoftwareConsortium/ITKTwoProjectionRegistration
It is available as an external module in ITK.
Example: [image attachment]
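The ITK module renders digitally reconstructed radiographs (DRRs) by ray casting. As a rough intuition for what a DRR computes, here is a parallel-beam sketch in NumPy (the function name is made up, the HU-to-attenuation constant is approximate, and this is not the module's actual API):

```python
import numpy as np

def drr_parallel(volume_hu, axis=1, mu_water=0.02):
    """Very rough parallel-beam DRR: convert Hounsfield units to linear
    attenuation, integrate along the beam axis, and apply Beer-Lambert
    to get the transmitted intensity at each detector pixel."""
    # HU -> linear attenuation (air ~ -1000 HU maps to ~0 attenuation).
    mu = np.clip(mu_water * (1.0 + volume_hu / 1000.0), 0.0, None)
    # Sum attenuation along the beam, then exponentiate: values in (0, 1].
    return np.exp(-mu.sum(axis=axis))

# Synthetic stand-in for a CT volume in Hounsfield units.
vol_hu = np.random.uniform(-1000, 400, size=(100, 256, 256))
print(drr_parallel(vol_hu).shape)
```

A real DRR (as in the ITK module) additionally models a point X-ray source with divergent rays and the patient-to-detector geometry, which is what makes it robust to the orientation differences mentioned above.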

@kayhan-batmanghelich

@sumedhasingla

This method can solve the projection problem, please try it:

[image: figure from the paper]

This is from this paper:
EMPHYSEMA QUANTIFICATION ON SIMULATED X-RAYS THROUGH DEEP LEARNING TECHNIQUES
