
An alternative method #22

Open
photopea opened this issue Apr 3, 2023 · 3 comments

Comments


photopea commented Apr 3, 2023

Hi Evan! I saw a post on Hacker News about you making thumbhash. But there already exists a simple compression method for tiny images, called BC1 (DXT1) texture compression: https://en.wikipedia.org/wiki/S3_Texture_Compression. It uses 4 bits per RGB pixel, and I believe a decompressor can be written in under 500 bytes of code.
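To make the "decompressor in a few hundred bytes" claim concrete, here is a minimal sketch of decoding one BC1 block in Python. It assumes the standard BC1 layout (two little-endian RGB565 endpoint colors followed by 32 bits of 2-bit palette indices per 4x4 block); the function names are my own, not from any library.

```python
import struct

def rgb565_to_rgb(v):
    # Expand 5/6/5-bit channels to 8 bits by bit replication.
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_block(block):
    """Decode one 8-byte BC1 block into a 4x4 grid of RGB tuples."""
    c0_raw, c1_raw, bits = struct.unpack("<HHI", block)
    c0, c1 = rgb565_to_rgb(c0_raw), rgb565_to_rgb(c1_raw)
    if c0_raw > c1_raw:
        # 4-color mode: two interpolated colors at 1/3 and 2/3.
        palette = [c0, c1,
                   tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:
        # 3-color mode: midpoint, plus black (transparent in DXT1a).
        palette = [c0, c1,
                   tuple((a + b) // 2 for a, b in zip(c0, c1)),
                   (0, 0, 0)]
    pixels = [[None] * 4 for _ in range(4)]
    for y in range(4):
        for x in range(4):
            # Indices are packed LSB-first, row by row.
            idx = (bits >> (2 * (4 * y + x))) & 3
            pixels[y][x] = palette[idx]
    return pixels
```

An 8x8 image is four such blocks, i.e. the 32 bytes mentioned above.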

I wrote a comment about it (https://news.ycombinator.com/item?id=35268000), and somebody actually implemented it: https://i.imgur.com/eIYZAmj.png (these are 8×8-pixel images, 32 bytes each, enlarged with bilinear interpolation).
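The enlargement step is independent of the codec; a bilinear upscale of the decoded 8×8 grid can be sketched as below. This is my own illustrative helper (edge-aligned sampling), not code from the linked implementation.

```python
def bilinear_resize(src, out_w, out_h):
    """Upscale a 2-D grid of RGB tuples with bilinear interpolation."""
    h, w = len(src), len(src[0])
    out = []
    for j in range(out_h):
        # Map output coordinates back into source space (edges aligned).
        fy = j * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0, ty = int(fy), fy - int(fy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for i in range(out_w):
            fx = i * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0, tx = int(fx), fx - int(fx)
            x1 = min(x0 + 1, w - 1)
            # Weighted average of the four surrounding source pixels.
            px = tuple(round(
                src[y0][x0][c] * (1 - tx) * (1 - ty) +
                src[y0][x1][c] * tx * (1 - ty) +
                src[y1][x0][c] * (1 - tx) * ty +
                src[y1][x1][c] * tx * ty) for c in range(3))
            row.append(px)
        out.append(row)
    return out
```

Swapping this for bicubic or a slight blur changes the look without touching the 32-byte payload.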

It has existed for decades, and I wonder if you are aware of it. Maybe you could mention it in your repository, or do some comparison.

@photopea photopea changed the title A better way An alternative method Apr 3, 2023
@Erudition commented

Based on your example image, the results are not nearly as pretty as blurhash or thumbhash, IMO.

@photopea (Author) commented

What do you mean by "pretty"? The goal is to depict the original image as closely as possible, not to be "pretty". I think the BC1 method is definitely less blurry (it preserves more detail). The bilinear interpolation could be replaced with something else.


Erudition commented May 22, 2023

The goal is to depict the original image as closely as possible

If that's your goal, go for it! I thought the same thing originally, but now I kinda agree with the blurhash project, which is designed to make placeholder content less generic and less ugly while also giving a taste of the unique imagery to be loaded.

Here on their GitHub FAQ you can see why they default to a four-by-three sample, despite the fact that the library supports much higher sampling. After playing with the config supported by the previewer on their website, I came to the conclusion that more color stops are indeed not usually nicer. This is of course algorithm-specific, but...

In general I think "contains more details" and "depicting the original as closely as possible" are actually the goals of a thumbnail, not a blurhash, which simply has the stated goal of "a compact representation of a placeholder for an image". Technically the former goal (max representation) is satisfied by a tiny low-res copy of the image that is blown up in all its pixelated glory... if that's not too ugly for you or your designers, a thumbnail is all you need!

[image: example blurhash]

By adding a blur, being choosy about the colors that dominate, etc., I'm conceding that being pretty is in fact the goal as well. So for me at least (and blurhash/thumbhash, I believe) the goal is to balance aesthetics with accuracy, which I think thumbhash does best so far.

The best way I can explain it is: In these placeholders there's a threshold of detail, above which the human visual system works harder to disambiguate the blurriness, ultimately making it take longer to come to the same inevitable conclusion of "oh, I need to wait for the real thing to load". It's like the uncanny valley between "placeholder" and "picture". When looking at a page full of noisier/detailed placeholders, I feel this effect even more strongly.

[image: four-by-three sampling vs. max detail]

An example like this really highlights for me why more placeholder detail isn't always better: the latter gives me more colors (like puke) but in practice doesn't actually bring me appreciably closer to understanding what's going on in the image, so it's just kinda extra cognitive load while I wait.

In the original image, I can realize that none of the objects on the table will ever be truly recognizable without high detail, so as a designer my instinct would be to fall back to the color of the table on which they sit - which the first blurhash (at the recommended sampling) does pretty well. The objects can then "place themselves" on that "blank" table when the full image loads. :P
