rework the final-scale and make it possible to have a sharpen after downscaling #13682
What kind of sharpening will it be? If we use unsharp masking, then a threshold could be introduced to avoid sharpening noise.

I implemented sharpening after export with postsharpen.lua. I exported the image, then ran it through ImageMagick's convert, usually using the unsharp operator. I also tried just using convert's sharpen function, but got better results with unsharp because of the noise threshold. One thing I did notice was larger file sizes after sharpening.

Marco Carrarini implemented Richardson-Lucy output sharpening in RL_out_sharp.lua. He used G'MIC to do the sharpening. Since we link the gmic library, you might be able to use some of the sharpening routines in there. For that matter, one of the magick libraries is linked in, so that should be available too.

I've pretty much stopped post-sharpening. Too often the images were (almost) over-detailed, especially after the introduction of Diffuse or Sharpen and the move to a higher-resolution camera.
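The unsharp-with-threshold idea mentioned above can be sketched in a few lines. This is an illustrative NumPy version, not darktable or postsharpen.lua code; the box blur (a stand-in for a proper Gaussian), the parameter names and the defaults are all placeholders:

```python
import numpy as np

def unsharp_mask(img, radius=2, amount=1.0, threshold=0.05):
    """Unsharp masking with a noise threshold (pixel values in [0, 1]).

    Pixels whose difference from the blurred copy is below the
    threshold are left untouched, which avoids amplifying noise.
    """
    # Simple separable box blur as a stand-in for a Gaussian.
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

    detail = img - blurred
    mask = np.abs(detail) >= threshold  # only sharpen real edges
    return np.clip(img + amount * detail * mask, 0.0, 1.0)
```

The threshold is what makes the difference described above: flat regions with small noise fall below it and pass through unchanged, while real edges get the full boost.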
Yes, the RL algorithm does a really nice job and it doesn't amplify noise.
I also hate oversharpened images and my D&S preset is very mild, but I still find it useful to apply some small output sharpening after downscaling.
Just some first comments from my side, and a reminder if you want to understand how the pixelpipe works. Some points we have to remember:

How to proceed? Not sure; some ideas:
This means that exporting could be very memory hungry and slow. Some modules require large amounts of data, and in HQ mode I had some crashes (I haven't used this mode in a long time because of this). But maybe the situation has improved, as a lot of work has been done to reduce memory use, speed up modules, and improve tiling where needed.
You mean just for "exports"? The goal is really to sharpen after down-scaling.
The underlying problem with exporting is the way dt uses memory (both system RAM and OpenCL), so user experience may vary. We define how much memory may be used via a preference; many people have set that to large or unrestricted in the hope of better performance. Unfortunately, if you a) are exporting in the background while b) working in the darkroom, the memory taken by the darkroom will likely double, resulting in a) possible OOM killing or significant swapping, and b) OpenCL fallbacks because the graphics memory requirements could not be fulfilled, as the tiling calculations were wrong. I don't see an easy way out here if we want to keep exporting in the background as a feature. Maybe we can keep some sort of state flag (export under way) that automatically halves the allowed memory, but that can easily shoot you in the foot.
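The "state flag that halves the allowed memory" idea could look something like this. A purely hypothetical sketch; the class, names, and structure are invented for illustration and are not dt's actual memory accounting:

```python
class MemoryBudget:
    """Hypothetical memory budget with an export-in-progress flag."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.exporting = False  # set while a background export runs

    def available(self):
        # Halve the budget during background export, as a crude
        # guard against darkroom + export doubling actual usage.
        return self.limit // 2 if self.exporting else self.limit
```

The foot-shooting risk noted above is visible even in the sketch: a pipe whose tiling was planned against the full budget before the flag flipped would still overcommit.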
Yes, that was how I understood it: add some sort of "postprocessing" after scaling, either down- or up-scaling.
Just a summary of the problems related to scaling & colorspace that we have. As mentioned in #13335 and #13635, we use scaling operators in the pixelpipe, likely at the wrong places, resulting in more or less subtle errors.

We do input scaling while presenting image data to the darkroom and while exporting. This is fine for raws (we zoom in demosaic) and sraws (we zoom before rawprepare), fine because we scale on linear data. For other images like jpeg/tiff and friends, we scale on whatever-data-we-get via rawprepare.

We don't output-scale in the darkroom; we get data scaled by demosaic and apply the output color later in the pixelpipe. (BTW, the gamma / display encoding module is not involved here; it just either passes data through or visualizes a mask.)

For exporting we have two modes. High quality is somewhat special: behind the scenes, not visible to the user, it enables the finalscale module and disables the scaling in the demosaicer. Good, as all modules in the pipe between demosaic and finalscale then work on full data.

So my proposal would be to modify the iop order of finalscale and move it to a fixed position in the pixelpipe just before colorout. I wouldn't be in favour of making this user-changeable. Also, a conversion to linear in finalscale, doing the scaling, and converting back wouldn't be nice imho. A hidden "export tuning module" would have to go between finalscale and colorout.
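Why scaling on non-linear data produces subtle errors can be shown with just two pixels: averaging sRGB-encoded values is not the same as averaging linear values and then encoding. A self-contained illustration, not dt code:

```python
def srgb_encode(x):
    # sRGB transfer function (OETF) for x in [0, 1]
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(y):
    # Inverse transfer function
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

# Downscaling 2:1 is, ideally, averaging neighbouring pixels.
black, white = 0.0, 1.0                      # linear values
correct = srgb_encode((black + white) / 2)   # average linear, then encode
naive = (srgb_encode(black) + srgb_encode(white)) / 2  # average encoded
# correct is about 0.735; naive is 0.5: averaging encoded
# data makes the downscaled result noticeably too dark.
```

This is exactly why scaling in demosaic (linear data) is fine while scaling whatever-data-we-get later in the pipe is not.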
Do all the modules that have size-related parameters process the preview image with scaled parameters? For example, those related to sharpening and blurring. After all, a 3-pixel blur radius applied to a 6000x4000 image and the same radius applied to that image scaled to 1500x1000 for preview will work differently.
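The usual remedy is to multiply size parameters by the ratio between working and full resolution before processing. A toy sketch of that idea; the function and its signature are invented for illustration and are not dt's actual ROI mechanism:

```python
def effective_radius(user_radius_px, roi_scale):
    """Scale a size parameter given at full resolution down to the
    working resolution, so the preview approximates the export.

    roi_scale = working width / full width, e.g. 1500 / 6000 = 0.25.
    """
    return user_radius_px * roi_scale

# A 3 px blur radius on the 6000x4000 original corresponds to
# a 0.75 px radius on a 1500x1000 preview.
preview_radius = effective_radius(3.0, 1500 / 6000)
```

Even with this scaling, sub-pixel radii on a small preview can only approximate the full-resolution result, which is part of why HQ export exists.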
I'm not sure about this because after
BTW, this was planned for 4.4, so it is now rescheduled for 4.8.
For the record: we might also have a look at how we scale. Our current algorithms don't work nicely with very high-frequency content and 0<->1 signal transitions, as found in some synthetic images. One example from Troy Sobotka's test set:
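The 0<->1 problem is easy to reproduce: a pattern alternating at pixel frequency aliases badly unless the scaler prefilters before decimating. A small NumPy illustration, not one of dt's actual resampling algorithms:

```python
import numpy as np

# Worst-case synthetic signal: pixels alternating 0, 1 at the
# Nyquist limit. Its mean brightness is 0.5.
signal = np.tile([0.0, 1.0], 64)

# Point-sampling every other pixel (no prefilter) aliases: it
# happens to hit only the zeros, so brightness collapses to 0.
point_sampled = signal[::2]

# A box prefilter (average each pair) at least preserves the mean.
box_filtered = signal.reshape(-1, 2).mean(axis=1)
```

Higher-order kernels (Lanczos and friends) additionally ring on such hard 0<->1 transitions, which is the other half of the problem with synthetic test images.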
Thanks for keeping this on the radar. I just went back through much of the discussion and there are a lot of things to consider, but given that DT puts such a priority on handling pixel data and color "correctly" in all the modules, I think it would be a disservice not to eventually sort it out. After some recent back and forth about ART vs DT, I almost wonder whether some people describing the ART look as cleaner can be attributed to the way things are ultimately displayed/exported in DT. Sorting this all out might be worth it just for the final presentation of images from DT. Thanks go to you and others for your time and effort on this.
That is an interesting image, and in addition to showing the scaling impact on the preview, it also really shows the difference between setting HQ resampling to yes or no when exporting. If you zoom on that image there are some small changes as you zoom, but at 100% there is a jump and a real change in the image, especially on the horizontal gradient: a lot of red jumps to yellow, and the black regions change a lot. When you export with HQ resampling set to yes, you get that 100%-zoom look in the exported jpg with the sRGB profile provided by DT. If set to no, you clearly get a version with the red saturation that you see at any zoom other than 100, 200, 400 or 800%. Also, exports with the ICC profiles from color.org have some wild colors that don't match the display preview, but this could be the EXR image and the LUT table in those profiles vs the matrix profile of DT? Not sure on that one.
Rethink / discuss the option of adding a post-scale sharpen?
FWIW, this is what AP did for Ansel: https://ansel.photos/en/doc/modules/processing-modules/finalscale/
Why not just introduce a rescale module? Then you can rescale the image, run as many modules as you want afterwards, and export at "full resolution".
For the final scale, there is discussion to happen here.
For the final sharpen, there is also discussion to happen here.
I had this proposal:
This issue is to track the discussion about the implementation of those two points. Please keep this discussion focused.
EDIT: Initial discussion was in #13635.
EDIT: For reference, having finalscale do the job on linear data is a good goal for quality.
EDIT: Moving the downscale before any tone curve (Sigmoid, Filmic...) means that all modules after that point will work on down-scaled images; what is the impact on quality?