
Advanced Tutorial

Josh edited this page Apr 14, 2024 · 23 revisions

Onyx - Advanced Tutorial

Welcome to the advanced tutorial! It's recommended to complete the Basic Tutorial before coming here - if you already have, great!

Let's get started with some boring boilerplate code - we can just pick up with all the code we left off with in the basic tutorial.

#include <string>

#include <Onyx/Core.h>
#include <Onyx/Math.h>
#include <Onyx/Window.h>
#include <Onyx/InputHandler.h>
#include <Onyx/RenderablePresets.h>
#include <Onyx/Renderer.h>
#include <Onyx/Camera.h>
#include <Onyx/TextRenderer.h>

using Onyx::Math::Vec2, Onyx::Math::Vec3;

int main()
{
    Onyx::ErrorHandler errorHandler(true, true, false);
    Onyx::Init(errorHandler);

    Onyx::Window window("Onyx Tutorial", 1280, 720);
    window.init();
    window.setBackgroundColor(Vec3(0.0f, 0.2f, 0.4f));

    Onyx::InputHandler input(window);

    Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));
    Onyx::Renderable cube = Onyx::RenderablePresets::TexturedCube(1.0f, container);

    Onyx::Camera cam(window, Onyx::Projection::Perspective(60.0f, 1280, 720));
    cam.translateFB(-2.0f);

    Onyx::Renderer renderer(window, cam);
    renderer.add(cube);

    Onyx::TextRenderer textRenderer(window);
    Onyx::Font roboto = Onyx::Font::Load(Onyx::Resources("fonts/Roboto/Roboto-Bold.ttf"), 48);
    textRenderer.setFont(roboto);

    input.setCursorLock(true);

    const float CAM_SPEED = 4.0f;
    const float CAM_SENS = 50.0f;

    while (window.isOpen())
    {
        input.update();

        if (input.isKeyDown(Onyx::Key::Escape)) window.close();

        if (input.isKeyDown(Onyx::Key::W))      cam.translateFB( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::A))      cam.translateLR(-CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::S))      cam.translateFB(-CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::D))      cam.translateLR( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::Space))  cam.translateUD( CAM_SPEED * window.getDeltaTime());
        if (input.isKeyDown(Onyx::Key::C))      cam.translateUD(-CAM_SPEED * window.getDeltaTime());

        cam.rotate(input.getMouseDeltas().getX() * 0.005f * CAM_SENS, input.getMouseDeltas().getY() * 0.005f * CAM_SENS);

        cam.update();

        cube.rotate(20.0f * window.getDeltaTime(), Vec3(1, 1, 1));

        window.startRender();
        renderer.render();

        Onyx::TextRenderer::StartRender();
        textRenderer.render("FPS: " + std::to_string(window.getFPS()), Vec2(20.0f, window.getBufferHeight() - roboto.getSize() - 7.0f), 0.8f, Vec3(0.0f, 1.0f, 1.0f));
        textRenderer.render(
            "XYZ: (" + std::to_string((int)cam.getPosition().getX()) + ", " +
            std::to_string((int)cam.getPosition().getY()) + ", " +
            std::to_string((int)cam.getPosition().getZ()) + ")",
            Vec2(20.0f, window.getBufferHeight() - roboto.getSize() * 1.6f - 7.0f), 0.5f,
            Vec3(1.0f, 1.0f, 1.0f)
        );
        Onyx::TextRenderer::EndRender();
        window.endRender();
    }

    window.dispose();
    renderer.dispose();
    Onyx::Terminate();

    return 0;
}

Running this should get you the same old textured spinning cube with an FPS counter and position readout in the top left. Let's get into the big topics of the advanced tutorial!

Lighting

Before I say anything - change the textured cube to a colored one; it will make the lighting pop out more.

// replace cube creation
Onyx::Renderable cube = Onyx::RenderablePresets::ColoredCube(1.0f, Vec3(1.0f, 0.0f, 0.0f));

Ok, now we can get started. See how it's all just one blob, and if it weren't rotating it would just look like a weird 2D shape? That's because there's no lighting, and so no shading. We need to add some lighting to the scene, consisting of a color, an ambient strength, and a direction; the brightness of each face of the cube will be based on how directly the light hits it. If you want crazy fancy lighting with shininess and reflections and whatnot, head over to Unreal Engine - we don't do that here; Onyx is nice and lightweight for simple games. Enough talk, let's get to it!

We can create a Lighting object and hand it to the renderer in its constructor, and that's all we have to do. Include Onyx/Lighting.h and create a Lighting object.

The first argument is the color of the light. The light's color is multiplied by the object's color, so if any channel (R, G, and B individually are called channels) is 0, then that channel will not be visible on the object, period. So, if you want colored light, try setting the channels you don't want to something like 0.8, or at least not 0. Do the same for the colors of your objects, since you'll run into the same thing if a color channel of your object is 0. If you're using a texture, you probably don't have to worry about this. I'm just going to use white light, simulating the sun (no, it's not yellow, just maybe a little orange at sunrise/sunset).

The second argument is the ambient strength of the light. This is the brightness of a face that is receiving no light at all, i.e. one facing completely away from the light. You can play around with this, but I like something like 0.3 (it should range 0-1).

The third argument is the direction vector of the light. If you don't know much about vectors representing directions (that is what they are meant to represent, though we have been using them as coordinates or RGB values), you can just copy the direction I use: <0.2, -1.0, -0.3>. (Directional vector notation uses angle brackets <>, don't ask me why.)

// replace renderer creation
Onyx::Lighting lighting(Vec3(1.0f, 1.0f, 1.0f), 0.3f, Vec3(0.2f, -1.0f, -0.3f));
Onyx::Renderer renderer(window, cam, lighting);

If you move a little to the right after starting the program, you should see this:
Image 8
Now it actually looks like a cube! It still doesn't look too special, but don't worry, you'll get HUGE results from lighting when we get into model loading.

Just for fun, let's put the texture back on the cube, and maybe make the light color change over time. Onyx::GetTime() gives us the time in seconds since the window was initialized, so we can use some trig on the time for some color changing results. To actually update the renderer's lighting, we need to call refreshLighting() on it.

// replace cube creation
Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));
Onyx::Renderable cube = Onyx::RenderablePresets::TexturedCube(1.0f, container);

// in mainloop - needs <cmath> and <algorithm> included up top
float t = (float)Onyx::GetTime();
lighting.setColor(Vec3(
    std::max(std::sin(t) * 2.0f, 0.5f),
    std::max(std::cos(t) * 0.7f, 0.5f),
    std::max(std::tan(t) * 1.3f, 0.5f)
));
renderer.refreshLighting();

Here's the reason we need to refresh the lighting: Onyx tries to save some performance by not updating the shader variables of renderables every frame. Instead, it only updates them when it needs to - when setLighting() is called, the lighting-related shader variables of all renderables are updated to the new lighting, and when a renderable is added to the renderer, the lighting-related shader variables of just that renderable are updated to the current lighting. But since we changed the existing Lighting object directly, we need to override this behavior, and the renderer provides the refreshLighting() function to do so.
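If you're curious what that caching looks like, here's a generic sketch of the pattern in plain C++ (not Onyx's actual internals - the names are made up): a setter changes the value without touching the GPU, and a separate refresh call does the upload.

```cpp
// Generic sketch of the refresh pattern. The "shader upload" is simulated
// by a counter so we can see that changing the value alone does nothing.
struct CachedLighting
{
    float color[3] = {1.0f, 1.0f, 1.0f};
    int uploads = 0;

    void setColor(float r, float g, float b)
    {
        color[0] = r; color[1] = g; color[2] = b;
        // Note: no upload here - the GPU-side value is now stale.
    }

    // Equivalent of refreshLighting(): push the current value to the GPU.
    void refresh() { uploads++; }
};
```

The real renderer does the upload through shader uniforms, but the shape of the pattern is the same.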

You should see some wacky results from this:
Image 9

Custom Renderables

Let's get rid of that cube renderable and make our own from scratch! This is a hefty topic, so it'll be divided into some subsections.

What are Renderables?

Renderables consist of three parts: the mesh, the shader, and the texture. The mesh contains the position and shape of the object, the shader does a few different things that we'll talk about, and the texture is pretty self explanatory at this point (many renderables, as we've seen, do not use a texture).

The Mesh

The mesh specifies the individual vertices that make up the object, and the indices that tell OpenGL how to draw the object.

Vertices may contain positional information (the most important), normal information (used for lighting), texture coordinate information, and/or color information (probably the least common, not seen in models). The positional information consists of 3D points in space, hopefully you're familiar with that. The normal information consists of 3D directional vectors in space, specifically ones perpendicular to the surface of the object. The lighting on an object will be the strongest if the normal vector and the light direction vector are pointed straight at each other, that's how it's calculated. The texture coordinate information consists of what are called UV mappings, essentially just the XY mappings, ranging from 0 to 1, on a texture image. (0, 0) is the bottom left of a texture, (1, 1) is the top right. Lastly, the color information consists of the 3 RGB values we're used to, as well as a 4th 'alpha' value, AKA the opacity. All of this information is specified in one array of floats, on a per-vertex basis. This will become more clear when we make a mesh ourselves, by far the hardest part of making custom renderables.
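To make the "one array of floats" idea concrete, here's a sketch (plain C++, the struct name is made up) of one fully-loaded vertex carrying all four attributes, using the component counts just described:

```cpp
#include <cstddef>

// One interleaved vertex with every attribute described above:
// position (3 floats), normal (3), texture coordinates (2), color RGBA (4).
struct FullVertex
{
    float position[3];
    float normal[3];
    float uv[2];
    float color[4];
};

// 12 floats per fully-loaded vertex in total.
constexpr std::size_t floatsPerVertex = 3 + 3 + 2 + 4;
```

In practice, which attributes are present depends on the vertex format, which we'll get to shortly.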

Indices tell OpenGL which vertices to use to draw the triangles that will eventually make up the object. Every object is rendered from triangles, each connecting 3 of its vertices. Each index (indices is the plural of index, to be clear) simply refers to one vertex - the first vertex specified being 0, the second 1, the third 2, and so on, like arrays.

If we are just drawing one triangle, we would have 3 vertices for the 3 points on the triangle, and the indices would simply be (0, 1, 2). Or (1, 2, 0), or (2, 0, 1), it doesn't matter.

If we are drawing a square, however, we would have 4 vertices for the 4 corners of the square, but the indices would have to specify two triangles that make up the square. Let's say our vertices are laid out as follows: top-left, top-right, bottom-right, bottom-left. Our indices would be: (0, 1, 2), (2, 3, 0). Hopefully you can visualize that and make it make sense. Again, the indices of the triangles themselves can be flipped around and it won't matter (we could instead use (2, 1, 0), (0, 3, 2)), but you have to make sure the triangles you're drawing correctly make up the object.
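Here's that square's index data written out in plain C++, plus a tiny sanity check you can run on any index array (the helper function is just for illustration, not part of Onyx):

```cpp
#include <cstddef>

// Square vertices laid out: top-left, top-right, bottom-right, bottom-left.
// Two triangles make up the square, exactly as described above.
const unsigned int squareIndices[] = {
    0, 1, 2,   // top-left, top-right, bottom-right
    2, 3, 0    // bottom-right, bottom-left, top-left
};

// Every index must refer to an existing vertex, and the index count must
// be a multiple of 3 (whole triangles only).
bool indicesValid(const unsigned int* indices, std::size_t count, std::size_t vertexCount)
{
    if (count % 3 != 0) return false;
    for (std::size_t i = 0; i < count; i++)
        if (indices[i] >= vertexCount) return false;
    return true;
}
```

Out-of-range indices and incomplete triangles are two easy mistakes to make when hand-writing meshes, so a check like this can save some head-scratching.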

Alright, that's the hardest part out of the way, let's move on.

The Shader

Shaders are programs that run on the GPU of your system. They are written in a language called GLSL, and while we could write and create our own completely custom shaders, we're going to use some of the many shader presets Onyx already has for us. If you're curious, you can take a peek at some shader code in the resources/shaders folder.

Shaders, at least how Onyx uses them, consist of two parts: the vertex shader and the fragment shader. All these details really aren't important, but are good to know. The vertex shader converts our positional coordinates, using the transformations we've applied to the renderable, the camera, and the projection of the camera, to coordinates on our screen. It displays these 3D objects onto our 2D screen. The fragment shader takes every individual fragment, essentially a pixel, and calculates its color. In the colored triangle preset we've used, the fragment shader is as simple as setting every pixel's color to the color we passed to the renderable preset function. In the textured cube preset, on the other hand, the fragment shader uses a factor calculated in the vertex shader specifying how direct the light is, and applies that brightness to the part of the texture specified by the texture coordinates in the vertices of the mesh. It's all complicated, not really necessary to understand it all.

The Texture

The texture is pretty simple, it's an image applied to a renderable. The UV mappings in the vertices we talked about earlier correspond to coordinates in the texture image, and the fragment shader interpolates these coordinates in each vertex to generate a coordinate for each fragment, and takes the color of the texture at that coordinate to generate the color of that pixel on the screen, along with any lighting calculations.
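Conceptually, that per-fragment interpolation is just a weighted blend of the triangle's vertex UVs. Here's a sketch of the idea (not Onyx's actual shader code):

```cpp
// Minimal sketch of UV interpolation: given barycentric weights (a, b, c)
// with a + b + c = 1 describing where the fragment sits inside the
// triangle, the fragment's UV is a weighted blend of the vertex UVs.
struct UV { float u, v; };

UV interpolateUV(UV p0, UV p1, UV p2, float a, float b, float c)
{
    return { a * p0.u + b * p1.u + c * p2.u,
             a * p0.v + b * p1.v + c * p2.v };
}
```

The GPU does this automatically for every attribute passed from the vertex shader to the fragment shader, not just UVs.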

If the renderable doesn't have a texture, then it doesn't use a shader that uses a texture. If we try to use a shader that requires a texture but we don't give the renderable a texture, the color will be calculated incorrectly; it will probably just be black. Actually, this is a good quick note - we can use a mesh that provides more data than the shader requires, but we can't use a shader that requires more data than the mesh provides.

Let's Make a Renderable!

Let's make a tetrahedron. It sounds fancy, but it's really just a shape made from 4 points - 3 points specifying the base triangle, and one point on top. From this, we get a base triangle and 3 side triangles.
Image 10

Let's make the 3 points for our base. They will vary on the X and Z axes and stay constant on the Y axis, since the base is flat. Our XZ coordinates, to center around the (0, 0) origin, should be: (-1, -1), (1, -1), (0, 1). Make sense? Then, our top point will sit at the center of the XZ plane with a Y coordinate of 1. So, our 3D XYZ positional coordinates are as follows: (-1, 0, -1), (1, 0, -1), (0, 0, 1), (0, 1, 0). Let's make an array of vertices to show this:

float vertices[] = {
    -1.0f, 0.0f, -1.0f,
     1.0f, 0.0f, -1.0f,
     0.0f, 0.0f,  1.0f,
     0.0f, 1.0f,  0.0f
};

Great, now what will our indices look like? Well, we'll start off by making a triangle out of the base, which is just (0, 1, 2). Then we need 3 triangles, each made from 2 consecutive vertices of the base plus the top vertex. For organization, the third index can always be the top vertex, and the first two can be the consecutive base vertices, starting at 0, 1. So here are those 3 sets of indices: (0, 1, 3), (1, 2, 3), (2, 0, 3). Note how we roll back around to 0 for the second index of that last set. Let's code this:

uint indices[] = {
    0, 1, 2,
    0, 1, 3,
    1, 2, 3,
    2, 0, 3
};

Now we can make a mesh out of this! A mesh needs a VertexArray object and an IndexArray object - these objects are very simple, they just need the array data and the size of the arrays, and the VertexArray also needs the format of our vertices. Based on what data the vertices may contain, we may have positional information, known also as just vertices (V), normals (N), texture coords (T), and/or colors (C). So the available formats include V, VN, VC, VT, VNC, VNT, and VNCT. We just have positional information, so our format will be V.

Onyx::Mesh mesh(
    Onyx::VertexArray(vertices, sizeof(vertices), Onyx::VertexFormat::V),
    Onyx::IndexArray(indices, sizeof(indices))
);

We won't use a texture for now, and our shader will have to be one compatible with the V vertex format. The only fitting one is V_Color. Sorry if these names are confusing - I tried - but shader names consist of the vertex format they are compatible with (unless it's a UI shader), then an underscore and whatever extension they may have, this one being that it takes a color. We will need to include Onyx/ShaderPresets.h. Additionally, now that we know how to make colors using RGB values 0-1 from the basic tutorial, we can be lazy and use some color presets in the vector class. The color will be a Vec4 this time, the fourth value being the alpha value. Let's add Vec4 to our using directive up top: using Onyx::Math::Vec4;.

Onyx::Shader shader = Onyx::ShaderPresets::V_Color(Vec4::Green());

Great, now we can create a renderable by simply passing in this mesh and shader, and add it to the renderer!

Onyx::Renderable tetrahedral(mesh, shader);
// ...
renderer.add(tetrahedral);

Now run the program, let's see what we've got!
Image 11
Well, it's kinda just a blob, even worse than the cube. We could add some normal vectors, but that would be too mathy. Instead, we're going to render in wireframe mode, which will show us the outlines of our triangles. All we have to do is use the renderer's static SetWireframe function:

Onyx::Renderer::SetWireframe(true);

Image 12
That's better. Let's add a texture to it.

But, there's a problem - each face needs its own 3 texture coordinates, which means we now need 3 vertices per face. This kind of defeats the whole point of the vertex and index system, but it's important to understand that when loading huge models, it does save lots of data.
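To see how much indexing saves on big models, here's a back-of-envelope comparison for a W x H grid of quads with 5 floats per vertex (position + texture coords). The functions are just for illustration:

```cpp
#include <cstddef>

// Indexed: vertices are shared between neighboring quads, plus
// 6 indices per quad (two triangles).
std::size_t indexedBytes(std::size_t w, std::size_t h)
{
    std::size_t verts = (w + 1) * (h + 1) * 5 * sizeof(float);
    std::size_t inds  = w * h * 6 * sizeof(unsigned int);
    return verts + inds;
}

// Non-indexed: 6 standalone vertices per quad, nothing shared.
std::size_t nonIndexedBytes(std::size_t w, std::size_t h)
{
    return w * h * 6 * 5 * sizeof(float);
}
```

For a 100x100 grid, the indexed version comes out well under half the size of the non-indexed one, and real models share vertices far more aggressively than this.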

Anyways, we now need 12 vertices for our textured tetrahedron, and our indices will just be 0-11 in order. Each face will use the bottom-left of the texture, the bottom-right, and then the middle-top, just like how the positional information is defined for our base triangle. So the texture coordinates for every face will be (0, 0), (1, 0), (0.5, 1). Here's how all that vertex and index data looks:

float vertices[] = {
    // positions            // texture coords
    -1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.5f, 1.0f,

    -1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f,

     1.0f, 0.0f, -1.0f,     0.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f,

     0.0f, 0.0f,  1.0f,     0.0f, 0.0f,
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.5f, 1.0f
};

uint indices[] = {
    0, 1, 2,
    3, 4, 5,
    6, 7, 8,
    9, 10, 11
};

Now, we need an actual texture to apply to the renderable. We can just use the container texture from before, or feel free to grab some different one from the web. We also need to change our vertex format to VT, since that is how our vertices are laid out, and we need to change our shader to the VT shader. This shader doesn't have any extras, so there's no underscore something, just the vertex format. Here's how all of that works out:

Onyx::Mesh mesh(
    Onyx::VertexArray(vertices, sizeof(vertices), Onyx::VertexFormat::VT),
    Onyx::IndexArray(indices, sizeof(indices))
);

Onyx::Shader shader = Onyx::ShaderPresets::VT();

Onyx::Texture container = Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"));

Onyx::Renderable tetrahedral(mesh, shader, container);

And, if you turned wireframe off, you should get this:
Image 13

Let's spice it up one last time - we can assign colors to each vertex along with these texture coordinates! Let's assign red to the bottom-left of the base, yellow to the bottom-right, green to the middle-top, and blue to the top of the whole thing (the 4th value will always be 1.0f, that's just the opacity):

float vertices[] = {
    // positions            // colors                   // texture coords
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     0.5f, 1.0f,

    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f,

     1.0f, 0.0f, -1.0f,     1.0f, 1.0f, 0.0f, 1.0f,     0.0f, 0.0f,
     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f,

     0.0f, 0.0f,  1.0f,     0.0f, 1.0f, 0.0f, 1.0f,     0.0f, 0.0f,
    -1.0f, 0.0f, -1.0f,     1.0f, 0.0f, 0.0f, 1.0f,     1.0f, 0.0f,
     0.0f, 1.0f,  0.0f,     0.0f, 0.0f, 1.0f, 1.0f,     0.5f, 1.0f
};

And once we change the vertex format and the shader to VCT, we get this cool result:
Image 14
Remember when I said: we can use a mesh that provides more data than the shader requires, but we can't use a shader that requires more data than the mesh provides? Well, our mesh provides position, color, and tex coord data, meaning we can still use the V_Color shader and assign a color to all vertices, or the VC shader to ignore the texture, and it will still work perfectly. Try it out!

That's gonna be it for the renderable section. You won't need most of this hands-on mesh building once we get to model loading, but I think it's good to know.

Model Loading

It's time for the big guns. This section will be way shorter than the renderable section (don't worry, they all will), and way cooler.

Onyx can load an OBJ file and turn it into a ModelRenderable object, and it's so easy. First, let's download a model to use. OBJ files, most of the time, come with complementary MTL (material) files to color the objects, and they sometimes come with textures as well. I've compiled this animal model pack into a nice zip folder that you can download here - extract its contents into your resources/models folder. Make sure that you have Animals.obj and Animals.mtl in the resources/models folder, and a large collection of images in resources/models/AnimalTextures.

Alright, now to load a model, we use a Model class and its static LoadOBJ function with the filepath, which for us will be Onyx::Resources("models/Animals.obj"). We then create a ModelRenderable object and just give it the loaded Model object. We can do this all on one line:

Onyx::ModelRenderable animals(Onyx::Model::LoadOBJ(Onyx::Resources("models/Animals.obj")));

Now you can add animals to the renderer just like you would any other renderable. Make sure you still have the lighting on the renderer from the first section, and run the program.
Image 15
Voila!
What Onyx is really doing here is parsing the OBJ (and MTL) file into data structures and then converting the data into the sets of vertices (positions, normals, texture coords) and indices that it can work with.

To really see the effect of lighting, let's add a keybind that will toggle lighting. The renderer, as well as many other classes, can handle toggling settings for us, so this is all we have to do (I'm going to use the L key):

// in mainloop
if (input.isKeyDown(Onyx::Key::L)) renderer.toggleLightingEnabled();

If you run it now, though, you may notice a problem - this lighting toggle runs every frame the L key is held down, so unless you tap it for just a single frame, it toggles repeatedly and the lighting just flickers.
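One generic fix is edge detection - remember last frame's key state and fire only on the up-to-down transition. Here's a sketch in plain C++ (Onyx's InputHandler isn't involved here):

```cpp
// Fires only on the frame a key transitions from up to down,
// no matter how long the key is then held.
struct KeyEdge
{
    bool wasDown = false;

    bool pressed(bool isDownNow)
    {
        bool fired = isDownNow && !wasDown;
        wasDown = isDownNow;
        return fired;
    }
};
```

Onyx has a built-in answer too, which is what we'll use next.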

Key Cooldowns

Luckily, the InputHandler class has a solution: cooldowns. We can add cooldown durations (in seconds) to each and every key and/or mouse button. Here's all we have to do:

// before mainloop
input.setKeyCooldown(Onyx::Key::L, 0.5f);

Now, there is a 0.5 sec cooldown for the L key. If you hold it down, it perfectly illustrates the importance of lighting:
Image 16

A Quick Tip

When downloading OBJ and MTL files, be careful renaming them. The OBJ file references the MTL file by name, so if you rename the files, you need to change that reference as well - near the beginning of the OBJ file there will be a line reading mtllib <name>.mtl, and it must match the filename of the MTL file. Additionally, in the MTL file, make sure all texture paths are accurate. The texture Onyx uses is the map_Kd variable in MTL files, so make sure the path after every map_Kd label is correct (many model uploaders leave the filepaths absolute, so they are only valid on their machine - you will have to fix this often).
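For reference, the relevant lines look roughly like this (the material and texture names here are made up for illustration):

```
# near the top of the OBJ file:
mtllib Animals.mtl

# inside the MTL file, one material entry:
newmtl ExampleMaterial
map_Kd AnimalTextures/example.png
```

The mtllib name must match the MTL filename exactly, and each map_Kd path must resolve from where the MTL file sits.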

UI (User Interface)

Apart from the text rendering, everything we've rendered has been affected by our projection and camera movement. But lots of the time we don't want that, we want UI - like buttons, menus, etc. Lucky for us, there is a UiRenderable class for just that! There are no presets for this class, however, so that's why I've saved it for the advanced tutorial - specifically after the custom renderables section. That knowledge will help us a lot here, although it's not as complicated.

All coordinates in OpenGL are 3D. OpenGL doesn't know whether we want to render something as UI or not, so all positional coords are 3D, period. For UI, we'll just set the Z coordinate to 0. UI rendering differs in two main ways: 1) the shaders used don't factor in the camera POV, so the object stays static on the screen, and 2) the coordinates we enter are SCREEN coordinates, not world coordinates. For our 1280x720 window, that means the range is (0-1280, 0-720). Let's make a little triangle UI object.

Similar to custom renderables, we define the vertices and indices first. Let's make a triangle, 300 pixels wide and tall, that will cover part of the bottom left of our screen, and make a Mesh out of it - hopefully you're good with this by now:

float vertices[] = {
    0.0f,   0.0f,   0.0f,
    300.0f, 0.0f,   0.0f,
    150.0f, 300.0f, 0.0f
};

uint indices[] = {
    0, 1, 2
};

Onyx::Mesh mesh(
    Onyx::VertexArray(vertices, sizeof(vertices), Onyx::VertexFormat::V),
    Onyx::IndexArray(indices, sizeof(indices))
);

Making a UiRenderable doesn't require explicitly defining a shader - that's handled internally. We can make either a colored UI renderable or a textured one. For now, we'll make it colored, using a slightly transparent orange with an alpha value of 0.5. Then, we can add it to the renderer as usual (you can keep the animals model).

Onyx::UiRenderable triangle(mesh, Vec4::Orange(0.5f));
// ...
renderer.add(triangle);

You should get this, being able to see through the triangle:
Image 17

Alright, let's make a textured UI renderable now. I found some random transparent-background PNG of a Santa hat, so I'm gonna use that. Here's a direct download link - add the image to your resources/textures folder: Download
(I renamed it to just santaHat.png)

Alright, so we're going to redo our vertices and indices, just to make a square with some self-explanatory texture coordinates. Let's make it in the middle of the screen, and maybe 200 px wide and tall. We'll also have to change the vertex format of the mesh to VT.

float vertices[] = {
    // positions                        // tex coords
    1280/2 - 100, 720/2 - 100, 0,       0, 0,
    1280/2 + 100, 720/2 - 100, 0,       1, 0,
    1280/2 + 100, 720/2 + 100, 0,       1, 1,
    1280/2 - 100, 720/2 + 100, 0,       0, 1
};

uint indices[] = {
    0, 1, 2,
    2, 3, 0
};

Onyx::Mesh mesh(
    Onyx::VertexArray(vertices, sizeof(vertices), Onyx::VertexFormat::VT),
    Onyx::IndexArray(indices, sizeof(indices))
);

(1280/2, 720/2) gets the exact center of the screen, and we subtract and add 100 to get a width and height of 2(100) = 200.
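That center-and-offset arithmetic generalizes to any centered rectangle. Here's a little helper sketch (hypothetical, not an Onyx function):

```cpp
// Fills the 4 corner positions of a quad centered at (cx, cy) with the
// given width and height, in the same corner order used above:
// bottom-left, bottom-right, top-right, top-left.
void centeredQuad(float cx, float cy, float w, float h, float out[4][2])
{
    float hw = w / 2.0f, hh = h / 2.0f;
    out[0][0] = cx - hw; out[0][1] = cy - hh;
    out[1][0] = cx + hw; out[1][1] = cy - hh;
    out[2][0] = cx + hw; out[2][1] = cy + hh;
    out[3][0] = cx - hw; out[3][1] = cy + hh;
}
```

Calling it with (1280/2, 720/2, 200, 200) reproduces exactly the four corners above.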

Now we can just give the UiRenderable constructor a texture instead of a color:

Onyx::UiRenderable santaHat(mesh, Onyx::Texture::Load(Onyx::Resources("textures/santaHat.png")));
// make sure to add 'santaHat' to the renderer now, not 'triangle'

Image 18
I put the hat on the cow 😄

We can also transform UI renderables, and it's much friendlier since there's only (effectively) 2 axes. Let's rotate it and scale it down each frame.

// in mainloop
santaHat.rotate(10.0f * window.getDeltaTime());
santaHat.scale(1 - 0.2f * window.getDeltaTime());

Image 19
Uhh... that's weird. What's going on?

Well, all meshes scale and rotate around the origin, (0, 0) - and everything here applies to 3D space too, not just UI, the origin there being (0, 0, 0). More precisely, they scale and rotate around the origin relative to their original vertex coordinates, and transformations don't affect those original vertex coordinates. So, if we want the hat to rotate around its center, we need to define its vertex coordinates around (0, 0) and then translate it to the middle of the screen. That's not too hard, let's do it!

// redefine vertices to be centered around 0, 0
float vertices[] = {
    // positions         // tex coords
    -100, -100, 0,       0, 0,
     100, -100, 0,       1, 0,
     100,  100, 0,       1, 1,
    -100,  100, 0,       0, 1
};
// ...
// just after UiRenderable creation
santaHat.translate(Vec2(1280/2, 720/2));

Image 20
Much better.
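Why did the off-center version orbit? Because rotation always happens around the origin. You can see it with a plain 2D rotation (a conceptual sketch, not Onyx code):

```cpp
#include <cmath>

// Rotates a 2D point around the origin by 'deg' degrees.
// Transformations act relative to (0, 0), which is why a mesh defined
// away from the origin appears to orbit instead of spin in place.
void rotateAroundOrigin(float& x, float& y, float deg)
{
    float r = deg * 3.14159265f / 180.0f;
    float nx = x * std::cos(r) - y * std::sin(r);
    float ny = x * std::sin(r) + y * std::cos(r);
    x = nx; y = ny;
}
```

A point at (100, 0) rotated 90 degrees ends up at roughly (0, 100) - it moved, it didn't spin in place. A whole quad defined around (640, 360) does the same thing, which is exactly the weirdness we saw.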

Texture Wrap Options

Quick detour - we can change the way our texture behaves when our UV coordinate goes above 1. (This all also applies to 3D renderables.) Let's change all the 1's in our texture coords to 2's, and see what happens (I'm gonna remove the rotating and scaling).
Image 21
Alright, so the default behavior is to repeat the texture. Let's take a look at the other two options, MirroredRepeat and ClampToEdge. To specify a texture wrap option other than Repeat, the default, we can enter a TextureWrap enum as the second argument to the Texture::Load function; we'll just start with Repeat. I'm gonna switch back to the container texture, but keep the variable name santaHat because I don't feel like switching it.

Onyx::UiRenderable santaHat(mesh, Onyx::Texture::Load(Onyx::Resources("textures/container.jpg"), Onyx::TextureWrap::Repeat));

Default (Repeat):
Image 22
MirroredRepeat:
Image 23
ClampToEdge:
Image 24

As you can see, MirroredRepeat mirrors the image (although it's hard to tell with the container), and ClampToEdge just extends the color of the pixels on the edge of the texture. Just wanted to throw this out there, that's all.
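If you're curious, the three modes boil down to simple math on the UV coordinate. Here's a conceptual sketch (not Onyx's implementation - the GPU does this in hardware):

```cpp
#include <cmath>

// Repeat: keep only the fractional part of the coordinate.
float wrapRepeat(float u) { return u - std::floor(u); }

// MirroredRepeat: every other tile is flipped.
float wrapMirrored(float u)
{
    float f = u - std::floor(u);
    bool oddTile = static_cast<long>(std::floor(u)) % 2 != 0;
    return oddTile ? 1.0f - f : f;
}

// ClampToEdge: coordinates outside [0, 1] stick to the edge pixel.
float wrapClamp(float u) { return u < 0.0f ? 0.0f : (u > 1.0f ? 1.0f : u); }
```

So with our texture coords going up to 2, Repeat samples the image twice, MirroredRepeat samples it forward then backward, and ClampToEdge smears the last row/column of pixels.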

Behaviors

Coming in v0.9.0

The Error Handler

This will be a very quick section. When we initialize the library, we have the option to give it an error handler. Recall:

Onyx::ErrorHandler errorHandler(true, true, false);
Onyx::Init(errorHandler);

The first argument is whether to log warnings, the second to log errors, and the third to throw errors. We're going to keep these settings for now.

Error Messages

We've never seen the error handler in action, so let's trigger it. Set the texture of our Santa hat to something that doesn't exist - we'll just misspell container:

Onyx::UiRenderable santaHat(mesh, Onyx::Texture::Load(Onyx::Resources("textures/contaner.jpg"), Onyx::TextureWrap::ClampToEdge));

Let's see what happens:
Image 25
Nice! The handler caught the error (and our santa hat should just be a black square).

Error Callbacks

What if we want to handle errors our own way? Well, we can do that! An error callback function, in this context, is a void function that takes an std::string as its only argument:

// before main() - needs <iostream> for std::cout
void errorCallback(std::string msg)
{
    std::cout << "Custom Error Callback: " << msg << "\n";
}

Now, we can simply set the error callback for the handler, and disable normal error logging (set second arg to false):

Onyx::ErrorHandler errorHandler(true, false, false);
errorHandler.setErrorCallback(errorCallback);
Onyx::Init(errorHandler);

Now, when our nonexistent file error occurs, we get this:
Image 26

Obviously, we can do the same exact thing for warning callbacks.

If you're making a game or something with an object-oriented design, you may hit a problem when using error callbacks - the callback function has to be static; it cannot be a non-static member of a class. Say your game lives in an Application class. You would make a static void(std::string) function and set that as the error callback. That's all fine and good, but what if you want to access the application from inside it? That's what Onyx::SetUserPtr() is for. In Application, call Onyx::SetUserPtr(this). Then, in your static error callback, retrieve the pointer with Onyx::GetUserPtr() and cast it back to an Application*.
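Here's the shape of that pattern boiled down to plain, self-contained C++ - the setUserPtr/getUserPtr functions below are stand-ins for Onyx::SetUserPtr/GetUserPtr, and the Application class is illustrative:

```cpp
#include <string>
#include <vector>

// Stand-ins for Onyx::SetUserPtr / Onyx::GetUserPtr: one global user pointer.
static void* g_userPtr = nullptr;
void setUserPtr(void* p) { g_userPtr = p; }
void* getUserPtr()       { return g_userPtr; }

// Illustrative application class that wants to receive errors.
struct Application
{
    std::vector<std::string> errors;

    // The callback must be static; it reaches the instance back
    // through the user pointer and a cast.
    static void onError(std::string msg)
    {
        Application* app = static_cast<Application*>(getUserPtr());
        if (app) app->errors.push_back(msg);
    }
};
```

In a real project you'd call Onyx::SetUserPtr(this) in Application and cast the result of Onyx::GetUserPtr() inside your static callback, exactly as described above.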

That's all!

With all of this information, you should be comfortable enough to start using the library on your own. You could also explore the Documentation, if you somehow wouldn't find that boring.
