
Reading a slide image while retaining channel data? #213

Open
austinv11 opened this issue Feb 17, 2025 · 5 comments

Comments

@austinv11

Hello,

I am using SOPA to do Xenium Explorer-based image alignment of a Visium HD dataset with a paired IHC-based dataset. However, when I use sopa.io.wsi (the image is an SVS file), or when I convert it to ome.tif and read it with sopa.io.ome_tif, SOPA reads and converts the data to RGB. The problem is that I need access to the raw channel data. Any idea how best to deal with this?

@quentinblampey
Collaborator

quentinblampey commented Feb 18, 2025

Hello @austinv11, indeed the WSI reader assumes the images are in RGB, but the ome_tif reader tries to look for the correct channel names. I imagine you got a warning saying that it couldn't find any channel name when using sopa.io.ome_tif? If so, could you please run the lines below to see what the description of your image is (I use it to look for the channel names)?

import tifffile as tf

tiff = tf.TiffFile("/path/to/your/image.ome.tif") # update this
description = tiff.pages[0].description

print(description)
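If the description turns out to be OME-XML, the channel names live in the `Channel` elements of the `Pixels` block. A quick way to pull them out (a sketch only, using a stub OME-XML string in place of a real file's description):

```python
# Hypothetical helper: extract Channel names from an OME-XML description.
import xml.etree.ElementTree as ET

def channel_names(description: str) -> list:
    """Return the Name attribute of every Channel element in an OME-XML string."""
    root = ET.fromstring(description)
    # The namespace URI varies with the OME schema version, so read it
    # off the root tag instead of hard-coding it:
    ns = root.tag.split("}")[0].strip("{")
    return [c.get("Name") for c in root.iter(f"{{{ns}}}Channel")]

# Minimal OME-XML stub for illustration:
demo = (
    '<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">'
    '<Image><Pixels>'
    '<Channel Name="Hematoxylin"/><Channel Name="DAB"/>'
    '</Pixels></Image></OME>'
)
print(channel_names(demo))  # ['Hematoxylin', 'DAB']
```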

Also, how did you convert the image to ome-tif?

@stergioc any idea how to look at channel names in the WSI reader?

@stergioc
Collaborator

Hello @austinv11 and @quentinblampey ,

To be honest, I am not sure that SVS files store the data in any space other than RGB. To my understanding, the scanning is performed in RGB space and the data is stored as such. Then, if you want to see the different stains as separate channels (e.g., H&E or H-DAB), you need to perform stain deconvolution (see here for how it is done in QuPath: https://qupath.readthedocs.io/en/stable/docs/tutorials/separating_stains.html).
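For completeness, the deconvolution itself is just a matrix inversion in optical-density space; a minimal NumPy sketch (the stain vectors below are the classic Ruifrok & Johnston H&E defaults plus a residual vector, and the tile is a random placeholder, not a real slide):

```python
import numpy as np

def deconvolve(rgb, stain_matrix):
    """Convert an RGB image in [0, 1] to per-stain optical-density channels."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))  # Beer-Lambert optical density
    inv = np.linalg.inv(stain_matrix)        # rows of stain_matrix: stain OD vectors
    return od @ inv                          # (H, W, n_stains)

# Ruifrok & Johnston H&E stain vectors, with a residual third vector
# so the matrix is invertible:
he = np.array([
    [0.650, 0.704, 0.286],   # Hematoxylin
    [0.072, 0.990, 0.105],   # Eosin
    [0.268, 0.570, 0.776],   # residual
])
he /= np.linalg.norm(he, axis=1, keepdims=True)

tile = np.random.rand(4, 4, 3)               # stand-in for a slide tile
stains = deconvolve(tile, he)
print(stains.shape)  # (4, 4, 3)
```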

Indeed, the names of the channels might be provided as metadata; if so, it should be straightforward to retrieve them.

In case there are SVS files that somehow store the deconvolved stains, I would be glad to take a look if there is a sample file.

@austinv11
Author

austinv11 commented Feb 18, 2025 via email

@stergioc
Collaborator

stergioc commented Feb 18, 2025

Thanks for the feedback on that. Did you try to use these stain deconvolution vectors from QuPath?

https://github.com/qupath/qupath/blob/5eb9405b25caf207df479b74c596ea0167fe15c8/qupath-core/src/main/java/qupath/lib/color/StainVector.java#L86-L88

It seems that separate_stains uses a different stain deconvolution vector:

https://github.com/scikit-image/scikit-image/blob/a1946147d8df64d473fba26015d0815a8d9938ce/skimage/color/colorconv.py#L638
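One way to reconcile the two is to build the conversion matrix from QuPath's own vectors rather than scikit-image's defaults. A NumPy-only sketch (the H-DAB values below are QuPath's rounded defaults as far as I can tell, so double-check them against what QuPath displays for your image):

```python
import numpy as np

# Stain vectors as displayed by QuPath for H-DAB (assumed values):
qupath_hdab = np.array([
    [0.65, 0.70, 0.29],   # Hematoxylin
    [0.27, 0.57, 0.78],   # DAB
])

# Add a residual third vector orthogonal to the first two so the
# 3x3 matrix is invertible:
residual = np.cross(qupath_hdab[0], qupath_hdab[1])
m = np.vstack([qupath_hdab, residual])
m /= np.linalg.norm(m, axis=1, keepdims=True)

rgb_from_stains = m                    # OD = stains @ m
stains_from_rgb = np.linalg.inv(m)     # the matrix to feed into deconvolution
print(stains_from_rgb.shape)  # (3, 3)
```

The same matrix could then be passed to skimage.color.separate_stains in place of its built-in `hed_from_rgb`.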

@austinv11
Author

austinv11 commented Feb 18, 2025

Yeah, I directly used the stain vector values from QuPath (it displays them in the image attributes window), but my final attempt in Python didn't match QuPath, unfortunately. I think it applies some sort of optical density correction or similar?
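The "optical density correction" mentioned here is most likely the Beer-Lambert transform applied to the raw intensities before the stain matrix is inverted; skipping it would explain a mismatch. A sketch for an 8-bit tile (background intensity assumed to be 255):

```python
import numpy as np

def rgb_to_od(tile_u8, background=255.0):
    """Map 8-bit intensities to optical density: OD = -log10(I / I0)."""
    i = np.maximum(tile_u8.astype(np.float64), 1.0)  # avoid log(0)
    return -np.log10(i / background)

tile = np.full((2, 2, 3), 255, dtype=np.uint8)       # pure white tile
print(np.allclose(rgb_to_od(tile), 0.0))  # True -- white background has zero density
```

The stain unmixing is then applied to these OD values, not to the raw RGB intensities.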

In case anyone finds it helpful, I ran this Groovy script in QuPath to deconvolve the stains into an ome.tif file:

// QuPath script to deconvolve brightfield images
import groovy.time.*
import qupath.lib.common.GeneralTools
import qupath.lib.images.servers.TransformedServerBuilder
import qupath.lib.images.writers.ome.OMEPyramidWriter
import qupath.lib.images.servers.ImageServerMetadata

import static qupath.lib.gui.scripting.QPEx.*

def timeStart = new Date()

// 1. Load current image data
def imageData = getCurrentImageData()
def stains = imageData.getColorDeconvolutionStains()

// 2. Create deconvolved server
def server = new TransformedServerBuilder(imageData.getServer())
    .deconvolveStains(stains)
    .build()

// 3. Clean metadata for OME-TIFF
server.setMetadata(
    new ImageServerMetadata.Builder(server.getMetadata())
        .name(GeneralTools.getNameWithoutExtension(
            imageData.getServer().getMetadata().getName()))
        .build()
)

// 4. Configure output parameters
def outputDir = buildFilePath(PROJECT_BASE_DIR, "ome-tiff")
mkdirs(outputDir)
def pathOutput = buildFilePath(outputDir,
    server.getMetadata().getName() + ".ome.tif")

println("Exporting OME-TIFF to: " + pathOutput)

// 5. Write pyramidal OME-TIFF
new OMEPyramidWriter.Builder(server)
    .tileSize(1024)
    .compression(OMEPyramidWriter.CompressionType.ZLIB)
    .scaledDownsampling(1.0, 2.0)
    .channelsInterleaved()
    .parallelize(4)
    .build()
    .writePyramid(pathOutput)

println("OME-TIFF exported: " + pathOutput)

def timeEnd = new Date()
TimeDuration duration = TimeCategory.minus(timeEnd, timeStart)
println("Time taken: " + duration.toString())
