create DeepCTexture node #68

Open
falkhofmann opened this issue Dec 5, 2021 · 2 comments

@falkhofmann (Collaborator)

I've already started on a node that is, at its core, very similar to the built-in DeepFromFrames.

It would take a 2D RGBA texture and convert it into deep data, with a user-defined front, back, and sample count. The main use case would be smoke, atmos, and other typical atmospheric elements.

However, I have some issues with how to read the actual 2D data inside the engine. The math for how to distribute and weight the samples already exists in DeepCConstant and could be taken from there; it already matches the DeepFromFrames behavior.
I would just need some help to get this going.
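For reference, a rough sketch of the distribution math I have in mind, assuming the usual deep over identity, so that N samples composite back to the original alpha (the struct and function names are illustrative, not the actual DeepC API):

```cpp
#include <cmath>
#include <vector>

// One deep sample: depth range [z, zBack] with premultiplied colour.
struct DeepSample { float z, zBack, r, g, b, a; };

// Spread a flat 2D RGBA pixel across `samples` deep samples between
// `front` and `back` so they over-composite back to the original alpha,
// which appears to be what DeepFromFrames / DeepCConstant do.
std::vector<DeepSample> spreadPixel(float r, float g, float b, float a,
                                    float front, float back, int samples)
{
    std::vector<DeepSample> out;
    out.reserve(samples);

    // Solve 1 - (1 - aPer)^samples == a for the per-sample alpha.
    const float aPer = 1.0f - std::pow(1.0f - a, 1.0f / samples);
    // Scale the premultiplied colour by the same ratio as the alpha.
    const float scale = (a > 0.0f) ? aPer / a : 0.0f;

    const float step = (back - front) / samples;
    for (int i = 0; i < samples; ++i) {
        const float z = front + step * i;
        out.push_back({z, z + step, r * scale, g * scale, b * scale, aPer});
    }
    return out;
}
```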


Further ideas for this tool would be:

  • a falloff at the front and back (see the sketch after this list)
  • a temporal offset from front to back, to give it a bit more complexity when objects move across depth, so it's not the same texture across all samples per pixel
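As a sketch of what I mean by the falloff (the linear ramps and parameter names are placeholders, not final knobs):

```cpp
// Fade per-sample alpha in/out over a user-defined fraction of the
// depth range instead of a hard cut at front and back. Linear ramps
// here; the actual shaping curve would be a design choice.
float falloffWeight(float t, float frontFalloff, float backFalloff)
{
    // t: normalised sample position in [0, 1], front to back.
    float w = 1.0f;
    if (frontFalloff > 0.0f && t < frontFalloff)
        w *= t / frontFalloff;                 // ramp in at the front
    if (backFalloff > 0.0f && t > 1.0f - backFalloff)
        w *= (1.0f - t) / backFalloff;         // ramp out at the back
    return w;
}
```
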
@falkhofmann added the "help wanted" label Dec 5, 2021
@charlesangus (Owner)

What's the use case you're thinking of for this vs projecting on cards in ScanlineRender and adding "thickness" after the fact?

One other neat thing might be a map to drive the depth/front/back, so you could, e.g., make a quick fake Z map out of the luma channel for smoke or something, give it a bit of shape, with more depth/samples in the denser parts.
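Roughly along these lines, as a sketch (the Rec. 709 luma weights and the linear mapping are just an illustration of the idea):

```cpp
// Map a pixel's luminance to a depth inside [nearZ, farZ]: brighter
// (denser) smoke sits nearer, thin areas recede.
float lumaToDepth(float r, float g, float b, float nearZ, float farZ)
{
    const float luma = 0.2126f * r + 0.7152f * g + 0.0722f * b; // Rec. 709
    return farZ + (nearZ - farZ) * luma;
}

// Spend more samples in the denser parts of the plate.
int lumaToSampleCount(float luma, int minSamples, int maxSamples)
{
    return minSamples + static_cast<int>(luma * (maxSamples - minSamples));
}
```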

@falkhofmann (Collaborator, Author) commented Dec 8, 2021

Oh, that also sounds interesting, tbh.

My use case is fairly close to the scenario you describe. Without any time offset from front to back in the actual texture, it wouldn't make any difference whether you go through DeepFromFrames or the ScanlineRender route.
Imagine a smoke plate you want to use as deep, perhaps to hold out a character moving through it. Whether you do this via ScanlineRender or DeepFromFrames, it will always look like a half-blended texture, just multiplied down based on depth: since there are no differing color/alpha samples in between, it's always the same value from front to back.
In comparison, if the created deep had a time offset, where the front is e.g. 4 frames offset relative to the back, you would see more detail/different values between the holdout and the front, so the deep holdout itself wouldn't have that blended feeling.
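A minimal sketch of the per-sample frame lookup I have in mind (the helper is hypothetical, not code from the node):

```cpp
// Which frame of the source texture a given deep sample should read:
// the front sample reads the current frame, the back sample reads
// `offsetFrames` earlier, with a linear blend in between.
float sampleFrame(int sampleIndex, int samples,
                  float currentFrame, float offsetFrames)
{
    const float t = (samples > 1)
        ? static_cast<float>(sampleIndex) / (samples - 1)
        : 0.0f;
    return currentFrame - t * offsetFrames;
}
```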

It's fairly hard to describe, I have to admit, but it's something that has bugged me for years, and I'm not even sure it would work properly with the time offset.

However, the good news is that I have a first working version similar to DeepFromFrames. I would need to make sure the two match in outcome, and would then continue to play with the time offset as well as a falloff on the deep front/back via user knobs.

In addition, I could give your idea a go. As long as the base node is solid, there might be even more scenarios to implement.

@falkhofmann removed the "help wanted" label Dec 8, 2021