Be more beginners friendly ! #3

Open
Lecrapouille opened this issue May 16, 2019 · 1 comment

Comments

@Lecrapouille

Hi, this is really nice work; it could become a great tutorial, but I think it needs some improvements because, as a beginner, I don't find everything straightforward to understand. I have not yet read the whole article; I stopped after the first sections. The goal of this article is to be understandable by beginners, but it uses a lot of terms that are never really introduced and are therefore difficult for beginners to understand. Maybe that is because I have never used Panda3D and I am extrapolating from my experience with OpenGL programs.

Materials:

The first texture is the normal map and the second is the diffuse map.

You should introduce what a normal map and a diffuse map are. At least mention that normal maps are textures holding mathematical vectors encoded/displayed as RGB colors: (R, G, B) = (X, Y, Z). I guess the vectors are normalized and each component is encoded as a single byte.

If an object uses its vertex normals, a "flat blue" normal map is used.

Certainly! But because you did not say that vector == color, it is difficult to get the point. By default, what vector are the texels initialized to? (0, 0, 1)? If so, I guess the X and Y components sit at the color midpoint while Z maxes out the blue channel, hence "flat blue"?

Also, why display such a big purple figure? It could be interesting to add a more informative figure, like https://en.wikipedia.org/wiki/Normal_mapping#/media/File:Normal_map_example_with_scene_and_result.png, and explain the role of the colors (e.g. red -> shadow, green -> light).

By having the same maps in the same positions for all models, the shaders can be generalized and applied to the root node in the scene graph.

You should explain in a few words what a scene graph is. As I understand it, it is a graph structure where each node holds at least a change-of-basis matrix (relative to its parent node) and an optional 3D model. In your case, do nodes also hold shaders, and are the shaders of parent nodes also applied to child nodes (like matrices)?
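For instance, something like this is what I have in mind by "scene graph" (a hypothetical minimal sketch, not Panda3D's real NodePath API; transforms are reduced to translations to keep it short):

```python
class Node:
    """Hypothetical minimal scene-graph node (not Panda3D's actual API).
    Each node stores a transform relative to its parent; here the
    transform is just a translation offset for brevity."""

    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = offset          # change of basis from the parent
        self.children = []

    def attach(self, child):
        self.children.append(child)
        return child

    def world_positions(self, parent=(0.0, 0.0, 0.0)):
        """Compose parent transforms down the tree, as a scene graph does."""
        world = tuple(p + o for p, o in zip(parent, self.offset))
        yield self.name, world
        for child in self.children:
            yield from child.world_positions(world)

root = Node("root")
arm = root.attach(Node("arm", (1.0, 0.0, 0.0)))
hand = arm.attach(Node("hand", (0.0, 2.0, 0.0)))
print(dict(root.world_positions()))
# {'root': (0.0, 0.0, 0.0), 'arm': (1.0, 0.0, 0.0), 'hand': (1.0, 2.0, 0.0)}
```

My question is whether shaders set on a parent node propagate to children the same way the matrices compose here.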

Concerning your computation:

round((0 * 0.5 + 0.5) * 255)

Should it not simply be round(0 * 255 + 0.5)? You should probably add a comment saying: to convert a float to the closest integer we use round(x + 0.5), because floats are truncated toward the lower integer and we want values in [0.0 ... 0.5[ rounded to 0.0 while values in [0.5 ... 1.0[ are rounded to 1.0.

round(255 / 255 * 2 - 1)

What is that formula? You should explain it. I had to go to Wikipedia to see that some axes are stored within -1 and 1 while others are within 0 and 1.
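If I understand correctly, the two formulas above are meant as an encode/decode pair between normal components and color bytes; a short sketch like this in the article would have saved me the trip to Wikipedia (my reconstruction, not the tutorial's code):

```python
def encode(n: float) -> int:
    """Map a normal component in [-1.0, 1.0] to a color byte in [0, 255]."""
    return round((n * 0.5 + 0.5) * 255)

def decode(c: int) -> float:
    """Map a color byte in [0, 255] back to a normal component in [-1.0, 1.0]."""
    return c / 255 * 2 - 1

# The "flat" normal (0, 0, 1) encodes to the purple-blue (128, 128, 255):
print(tuple(encode(n) for n in (0.0, 0.0, 1.0)))  # (128, 128, 255)
print(decode(255))                                # 1.0
```

This also answers why the map looks "flat blue": X = Y = 0 lands on the 128 midpoint while Z = 1 saturates the blue channel.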

section GLSL

Instead of using the fixed-function pipeline, you'll be using the programmable GPU rendering pipeline.

Is this really a tutorial for beginners? :) They have probably never programmed in legacy OpenGL. I understand that explaining the programmable GPU pipeline all over again is boring. Maybe you should just summarize it with a few block diagrams and add links to tutorials that explain the full pipeline.

So a simple figure is enough:

      +--------------------+----------------------+
      |                    |                      |
      |                    V                      V
[Panda3D code] ==> [Vertex shader] ==> [Fragment shader] ==> [Framebuffer]

==> pipeline for shader attributes
--> pipeline for shader uniforms

This will also introduce the framebuffer for the next section.

Note the two keywords uniform and in... The in keyword means this global variable is being given to the shader.

These seem to me to be important elements of the shader; I would not define them inside a simple note but give them a full paragraph of description. And why is out not defined alongside them? I would simply introduce the inputs and outputs of shaders: the input of a fragment shader is the output of a vertex shader.
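To make the in/out pairing concrete, a minimal vertex/fragment pair could look like this (a sketch: the p3d_* names are Panda3D's built-in shader inputs as I understand them, while texCoord and fragColor are illustrative names, not necessarily the tutorial's):

```glsl
// Vertex shader: `in` variables arrive per vertex from Panda3D,
// `uniform` variables are constant for the whole draw call, and
// `out` variables are handed to the fragment shader.
#version 150

uniform mat4 p3d_ModelViewProjectionMatrix;
in vec4 p3d_Vertex;
in vec2 p3d_MultiTexCoord0;
out vec2 texCoord;          // becomes `in vec2 texCoord` downstream

void main() {
    texCoord = p3d_MultiTexCoord0;
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}
```

```glsl
// Fragment shader: the matching `in` receives the interpolated value,
// and the `out` vector is what gets written to the bound framebuffer.
#version 150

uniform sampler2D p3d_Texture0;
in vec2 texCoord;           // matches the vertex shader's `out`
out vec4 fragColor;

void main() {
    fragColor = texture(p3d_Texture0, texCoord);
}
```

One such paragraph plus a pair like this would make the data flow obvious.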

Render To Texture

Instead of rendering/drawing/painting directly to the screen

Technically, the screen is itself a framebuffer: it is the framebuffer that is bound by default.

The textures bound to the framebuffer hold the vector(s) returned by the fragment shader. Typically these vectors are color vectors (r, g, b, a) but they could also be position or normal vectors (x, y, z, w).

This would be better placed when introducing normal maps in the first section.

each fragment shader in the example code has only one output. 

What is this output?

texturing

Texturing involves mapping some color or some other kind of vector to a fragment using UV coordinates. 

In the figure just after, you should display the U and V axes in the picture and say that U and V are the X and Y axes of the texture's own frame.
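Even a toy lookup would make the idea clear, something like this (my own sketch of nearest-neighbor sampling, not Panda3D code; conventions such as the V axis origin may differ):

```python
def sample_nearest(texture, u, v):
    """Map UV coordinates in [0, 1] x [0, 1] to the nearest texel.
    U runs along the texture's own X axis, V along its Y axis,
    independent of the texture's pixel resolution."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)    # U -> column (texture X axis)
    y = min(int(v * height), height - 1)  # V -> row    (texture Y axis)
    return texture[y][x]

# A 2x2 texture: the same UV coordinates address any resolution.
texture = [["red",  "green"],
           ["blue", "white"]]
print(sample_nearest(texture, 0.0, 0.0))  # red
print(sample_nearest(texture, 1.0, 1.0))  # white
```

The key point for beginners is that UV coordinates are resolution-independent: (0.5, 0.5) means "the middle of the texture" whether it is 2x2 or 2048x2048.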

@dahuigeniu

Good project, thank you.
