[Multi-Modal Image+Text] Process the volumetric images from UPMC dataset and try a simple method #37
Comments
OK, so this is not the ITK code, this is just averaging, right?
…On Sun, Oct 14, 2018 at 1:30 PM Sumedha Singla ***@***.***> wrote:
In the first try: we converted the 3D image to 2D by taking an average over the x,
y and z directions. The UPMC reports are passed through an NLP pipeline to
extract 14 disease labels, so that we can run a multi-label classification
problem on them.
The quality of the image obtained this way is very bad. Not a very good 3D-to-2D
conversion.
Example:
![image](https://user-images.githubusercontent.com/13970739/46919963-45178680-cfb5-11e8-800e-fa839dc55233.png)
1. Average over middle 20% of the slices.
2. Average over all the slices
3. A single middle slice.
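For reference, a minimal sketch of the three projections described above, assuming the volume is already loaded as a NumPy array (the function name, array name, and axis choice are illustrative, not the code used here):

```python
import numpy as np

def project_volume(volume, axis=0, mode="all"):
    """Collapse a 3D volume to 2D along `axis`.

    mode:
      "all"    - average over all slices
      "middle" - average over the middle 20% of slices
      "single" - take the single middle slice
    """
    n = volume.shape[axis]
    if mode == "all":
        return volume.mean(axis=axis)
    if mode == "middle":
        lo, hi = int(0.4 * n), int(0.6 * n)       # middle 20% of the slices
        return volume.take(range(lo, hi), axis=axis).mean(axis=axis)
    if mode == "single":
        return volume.take(n // 2, axis=axis)     # single middle slice
    raise ValueError(f"unknown mode: {mode}")
```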
No, this is just averaging. I am posting new comments with those results.
The subjects in the UPMC dataset are not always in the same orientation.
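One way to reduce this problem before projecting is to resample every volume into a fixed anatomical orientation. A minimal sketch using SimpleITK's DICOMOrient filter (assuming SimpleITK >= 2.0 and a single-file volume such as NIfTI; whether this covers every UPMC case is untested here):

```python
import SimpleITK as sitk

def load_reoriented(path, orientation="LPS"):
    """Load a volume and reorder its axes into a fixed anatomical
    orientation so every subject is projected the same way."""
    image = sitk.ReadImage(path)
    image = sitk.DICOMOrient(image, orientation)  # reorder axes to LPS
    return sitk.GetArrayFromImage(image)          # numpy array, (z, y, x)
```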
This method can solve the projection problem; please try it. It is from this paper:
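The paper link is not preserved in this thread, so the exact method is not reproduced here. As a rough illustration only, a Beer-Lambert-style ray sum gives a more X-ray-like 2D image from a CT volume than a plain average (all constants below are assumed, not taken from the paper):

```python
import numpy as np

def ct_to_xray(volume_hu, axis=1, mu_water=0.2, voxel_cm=0.1):
    """Very rough digitally-reconstructed-radiograph-style projection.

    volume_hu : CT volume in Hounsfield units (numpy array).
    axis      : axis to project along (ray direction).
    mu_water  : assumed linear attenuation of water in 1/cm (illustrative).
    voxel_cm  : assumed voxel size along the ray in cm (illustrative).
    """
    # Convert HU to non-negative linear attenuation coefficients.
    mu = np.clip(mu_water * (1.0 + volume_hu / 1000.0), 0.0, None)
    # Beer-Lambert: transmitted intensity falls off with summed attenuation.
    intensity = np.exp(-mu.sum(axis=axis) * voxel_cm)
    # Invert so dense structures (bone) appear bright, as in a radiograph.
    xray = 1.0 - intensity
    return (xray - xray.min()) / (xray.max() - xray.min() + 1e-8)
```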
Convert the 3D volumetric image to a 2D X-ray
Run a simple model for feature extraction and classification (see the sketch after this list)
Extract classification labels from the UPMC reports by passing them through an NLP pipeline and extracting 14 disease labels
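To make the multi-label setup concrete, a minimal PyTorch sketch (the architecture, image size, and dummy labels are illustrative; the actual UPMC NLP labeler is not shown):

```python
import torch
import torch.nn as nn

NUM_LABELS = 14  # one output per extracted disease label

class SimpleClassifier(nn.Module):
    """Small CNN over the projected 2D images with a multi-label head."""
    def __init__(self, num_labels=NUM_LABELS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_labels)

    def forward(self, x):                     # x: (batch, 1, H, W)
        feats = self.features(x).flatten(1)   # feature extraction
        return self.classifier(feats)         # raw logits, one per label

# Multi-label setup: an independent sigmoid per label, not a softmax.
model = SimpleClassifier()
criterion = nn.BCEWithLogitsLoss()
images = torch.randn(4, 1, 224, 224)                    # projected 2D images (dummy batch)
labels = torch.randint(0, 2, (4, NUM_LABELS)).float()   # NLP-derived labels (dummy)
loss = criterion(model(images), labels)
```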