Create a basic example that outlines a basic deferred rendering approach #1633

Open · wants to merge 1 commit into master
Conversation

MeFisto94 (Member)
No description provided.

stephengold added this to the v3.5.0 milestone on Oct 26, 2021
stephengold (Member) commented Nov 3, 2021

Thanks for your contribution.

The added files need lots of explanatory comments: what do they do (at a high level), how, and why? Also, please add the JME copyright notice to the new files that lack it.

stephengold added the examples label (specific to the jme3-examples sub-project) on Nov 12, 2021
stephengold (Member)

@MeFisto94 Are you still interested in this PR?

MeFisto94 (Member, Author)

Yeah, sorry for the late reply; I've had it on my todo list but forgot about it multiple times.
We probably need to think about the comments, because IMO there is no point in explaining deferred rendering in depth when there are scientific articles outlining it, and my comments would probably do a much worse job of explaining it.

I guess you're more talking about the basics, though? Such as: we basically render everything into off-screen buffers the size of the screen, and then later use a quad render (a Filter) to compose the image and compute the lighting from the information present in said G-buffer. And we do this mainly to reduce costly vertex shader invocations (especially with animated models and high light counts), which would be less of a problem if we had a clever forward pipeline, which we lack; but research has shown that this approach opens up a lot of new opportunities as well.
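For illustration, here is a minimal sketch of what such a G-buffer setup could look like in jME. This is not the code from this PR; the texture formats and the use of the jME 3.3+ FrameBufferTarget API are assumptions made for the example:

```java
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.FrameBuffer.FrameBufferTarget;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

/**
 * Hypothetical G-buffer: screen-sized off-screen targets that the
 * extraction pass writes into, one scene render filling all of them.
 */
public class GBuffer {
    public final Texture2D diffuse; // albedo color
    public final Texture2D normals; // surface normals
    public final Texture2D depth;   // depth, to reconstruct positions
    public final FrameBuffer fb;

    public GBuffer(int width, int height) {
        diffuse = new Texture2D(width, height, Image.Format.RGBA8);
        normals = new Texture2D(width, height, Image.Format.RGBA16F);
        depth   = new Texture2D(width, height, Image.Format.Depth);

        fb = new FrameBuffer(width, height, 1);
        fb.setMultiTarget(true); // MRT: one pass fills all color targets
        fb.addColorTarget(FrameBufferTarget.newTarget(diffuse));
        fb.addColorTarget(FrameBufferTarget.newTarget(normals));
        fb.setDepthTarget(FrameBufferTarget.newTarget(depth));
    }

    /** Redirect a viewport's scene render into the G-buffer. */
    public void bindTo(ViewPort vp) {
        vp.setOutputFrameBuffer(fb);
    }
}
```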

pspeed42 (Contributor)

I thought deferred rendering still has basically the same vertex invocations, but it's the fragment invocations that are reduced? ...in the sense that the final render only does a full "pixel" render for things on the screen.

...I'm curious how deferred rendering would eliminate the need to at least render the shapes, though (i.e. the vertices).

stephengold (Member)

> I guess you're more talking about the basics, though?

Yes, a brief summary (similar to what you wrote above) would be fine with me. As usual, people who want to know all the gory details can read the code and/or look at outside resources.

MeFisto94 (Member, Author)

See, that's why an explanation from someone who has merely understood the topic (me) is a bit risky, but maybe we can work together on something.

So let me sum up the shader differences:
The extraction pass can save vertex shader invocations much like single-pass forward lighting does: you don't need to run once per light, but can run once per geometry (whereas single-pass forward lighting runs once per batch of n lights). That's no big deal compared to an optimized forward pipeline, but it may matter compared to stock jME. It also depends heavily on the geometry: for animated and other vertex-shader-heavy geometries it's more relevant.
The extraction fragment shader then mostly does texture lookups, so you don't have heavy lighting calculations there either.

For the merge pass, you have the benefit that you no longer run per geometry but at the buffer resolution, so you have far fewer actual lighting fragment shader invocations and far less costly overdraw (since the overdraw already happened at extraction time, with simpler shaders).
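To make that concrete, here is a minimal sketch of the merge pass as a jME fullscreen Filter. Again, this is not this PR's code; the material definition "MatDefs/DeferredMerge.j3md" and its parameter names are hypothetical:

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.Texture2D;

/**
 * Hypothetical merge pass: a fullscreen Filter whose fragment shader runs
 * once per screen pixel, fetches the G-buffer data, and evaluates lighting.
 */
public class DeferredMergeFilter extends Filter {
    private final Texture2D diffuse, normals, depth; // G-buffer targets

    public DeferredMergeFilter(Texture2D diffuse, Texture2D normals,
                               Texture2D depth) {
        super("DeferredMergeFilter");
        this.diffuse = diffuse;
        this.normals = normals;
        this.depth = depth;
    }

    @Override
    protected void initFilter(AssetManager assets, RenderManager rm,
                              ViewPort vp, int w, int h) {
        // "MatDefs/DeferredMerge.j3md" is a made-up material definition
        // that samples the G-buffer and accumulates the light terms.
        material = new Material(assets, "MatDefs/DeferredMerge.j3md");
        material.setTexture("DiffuseBuffer", diffuse);
        material.setTexture("NormalBuffer", normals);
        material.setTexture("DepthBuffer", depth);
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}
```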

Also, for reference: an actually better implementation (clustered deferred rendering), for which our API is not ready yet, would add the following:
Pass in the world light list as an SSBO/UBO so the shader can indeed iterate over all of the lights, and combine that with light culling, preferably in a geometry shader:
When we pass hundreds of lights into the fragment shader and do the light culling (branching) there, we take a big performance hit because of how GPU parallelism works (every branch is evaluated and simply discarded later, for a whole batch of fragments).
If we instead iterate over an array of indexes of the lights that are actually in sight, we can efficiently look up the lights that affect any given fragment. How that separation (clustering) should work, and other details, are implementation concerns and the subject of papers on the topic, but that's the basic idea for getting the best out of a deferred pipeline (because otherwise you can easily end up slower than an optimized forward pipeline).
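To make the index-list idea concrete, here is a small CPU-side sketch of tiled light culling. It is deliberately independent of any jME API; the tile size and the screen-space light bounds are illustrative assumptions, and a real implementation would upload the result as an SSBO/UBO:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical tiled light culling: divide the screen into tiles and record,
 * per tile, the indexes of the lights whose screen-space bounds touch it.
 * The lighting shader then iterates only that short list per fragment
 * instead of branching over hundreds of lights.
 */
public class TiledLightCuller {
    static final int TILE = 32; // tile size in pixels (arbitrary choice)

    /** Screen-space circle of influence of one light, already projected. */
    public record LightBounds(float x, float y, float radius) {}

    public static List<Integer>[] cullLights(List<LightBounds> lights,
                                             int width, int height) {
        int cols = (width + TILE - 1) / TILE;
        int rows = (height + TILE - 1) / TILE;
        @SuppressWarnings("unchecked")
        List<Integer>[] tiles = new List[cols * rows];
        for (int i = 0; i < tiles.length; i++) {
            tiles[i] = new ArrayList<>();
        }
        for (int li = 0; li < lights.size(); li++) {
            LightBounds lb = lights.get(li);
            // Clamp the light's bounding square to the tile grid.
            int minC = Math.max(0, (int) ((lb.x() - lb.radius()) / TILE));
            int maxC = Math.min(cols - 1, (int) ((lb.x() + lb.radius()) / TILE));
            int minR = Math.max(0, (int) ((lb.y() - lb.radius()) / TILE));
            int maxR = Math.min(rows - 1, (int) ((lb.y() + lb.radius()) / TILE));
            for (int r = minR; r <= maxR; r++) {
                for (int c = minC; c <= maxC; c++) {
                    tiles[r * cols + c].add(li); // light li may affect tile
                }
            }
        }
        return tiles;
    }
}
```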

I haven't done that yet because we have a pending PR on SSBOs, plus an, IMO, even better jME fork from someone who intended to contribute them back as well; and I also had ideas on how to use annotations to create SSBO layouts etc. that we'd need to work on first, same for geometry and other shader types.

stephengold modified the milestones: v3.5.0 → Future Release, on Jan 23, 2022
scenemax3d (Contributor)

Hi @MeFisto94,
Does this PR somehow collide and/or overlap with #1782?
Can both of those PRs be merged without breaking each other?

Thanks :)

MeFisto94 (Member, Author)

Hey @scenemax3d, I think there is no collision. Basically, the point was that #1782 would be an excellent base for improving this example (though then it wouldn't be basic anymore) by using SSBOs/UBOs, IIRC.
So yes, they could be merged independently. As it stands, though, this PR would need more work to be useful; but then again, it's also a template for someone incorporating the changes into their own game anyway.

Labels: enhancement; examples (specific to the jme3-examples sub-project)

4 participants