OpenGL in C# – an object-oriented introduction to OpenTK

Vertex Buffer Object

The primary way of rendering anything in OpenGL is using vertices. The easiest way of handling vertices in C# is by creating custom structures for them.

For this example we will use a simple vertex with a position and a colour.
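
As a sketch, such a vertex type could look like the following. The name ColouredVertex and the hard-coded byte size come up again further down; the exact fields and constructor shown here are my own reconstruction.

```csharp
using OpenTK;
using OpenTK.Graphics;

struct ColouredVertex
{
    // size of this vertex in bytes: 3 floats for the position, 4 floats for the colour
    public const int Size = (3 + 4) * 4;

    private readonly Vector3 position;
    private readonly Color4 color;

    public ColouredVertex(Vector3 position, Color4 color)
    {
        this.position = position;
        this.color = color;
    }
}
```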

To render our vertices, we will need a vertex buffer object (VBO). You can think of this as a piece of GPU memory that we can copy our vertices to, so that they can be rendered.

Below you can see a very simple implementation wrapping all the functionality we will need.
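
A minimal sketch of such a wrapper could look like this; the class shape is my own, but it only uses the standard OpenTK GL calls (GenBuffer, BindBuffer, BufferData, DrawArrays):

```csharp
using System;
using OpenTK.Graphics.OpenGL;

sealed class VertexBuffer<TVertex>
    where TVertex : struct // vertices must be structs so we can copy them to GPU memory easily
{
    private readonly int vertexSize;
    private TVertex[] vertices = new TVertex[4];

    private int count;

    private readonly int handle;

    public VertexBuffer(int vertexSize)
    {
        this.vertexSize = vertexSize;

        // generate the actual buffer object on the GPU
        this.handle = GL.GenBuffer();
    }

    public void AddVertex(TVertex v)
    {
        // grow the backing array if it is full
        if (this.count == this.vertices.Length)
            Array.Resize(ref this.vertices, this.count * 2);

        this.vertices[this.count] = v;
        this.count++;
    }

    public void Bind()
    {
        // make this the active array buffer
        GL.BindBuffer(BufferTarget.ArrayBuffer, this.handle);
    }

    public void BufferData()
    {
        // copy the contained vertices to GPU memory
        GL.BufferData(BufferTarget.ArrayBuffer,
            (IntPtr)(this.vertexSize * this.count),
            this.vertices, BufferUsageHint.StreamDraw);
    }

    public void Draw()
    {
        // draw the buffered vertices as triangles
        GL.DrawArrays(PrimitiveType.Triangles, 0, this.count);
    }
}
```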

For reasons of brevity, I will not go into detail on each of the GL calls used. Feel free, however, to look up the documentation of the corresponding OpenGL functions online, or to ask questions in the comments if anything is unclear.

To keep the code short, I am also completely ignoring the need to dispose of OpenGL objects we are no longer using.

If you are interested in how to do so, you can take a look at how I implement the IDisposable interface in my library’s VertexBuffer.

Shaders

To render anything, we need shaders: little programs that we can run on the GPU to put our vertices onto the screen.

In our simple case we want to create a vertex shader to transform our vertices into screen space and prepare them for rasterization, and a fragment shader (also known as a pixel shader, especially in Direct3D) to assign a colour to each of our rendered fragments. For our purposes fragments are the same as pixels, and we will drop the distinction for this post.

In OpenGL, we create these two shaders separately and then link them together into one shader program as follows:
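
A sketch of two thin wrappers doing exactly that (the class design is mine; the GL calls are the standard CreateShader/CompileShader and CreateProgram/LinkProgram ones):

```csharp
using OpenTK.Graphics.OpenGL;

sealed class Shader
{
    public int Handle { get; private set; }

    public Shader(ShaderType type, string code)
    {
        // create a shader object of the given type and compile the GLSL source into it
        this.Handle = GL.CreateShader(type);
        GL.ShaderSource(this.Handle, code);
        GL.CompileShader(this.Handle);
    }
}

sealed class ShaderProgram
{
    public int Handle { get; private set; }

    public ShaderProgram(params Shader[] shaders)
    {
        // create a program object and link all the given shaders together
        this.Handle = GL.CreateProgram();

        foreach (var shader in shaders)
            GL.AttachShader(this.Handle, shader.Handle);

        GL.LinkProgram(this.Handle);

        // once linked, the shader objects can be detached again
        foreach (var shader in shaders)
            GL.DetachShader(this.Handle, shader.Handle);
    }

    public void Use()
    {
        // make this the active shader program
        GL.UseProgram(this.Handle);
    }
}
```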

Shaders in OpenGL are written in the OpenGL Shading Language (GLSL), which looks very much like C.

For this example we will use very simple shaders. Our vertex shader will transform the vertex position by multiplying it with a projection matrix which we will set below.

It will also pass on the colour of the vertex, which will then be used in the fragment shader to colour the pixel.
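
As a sketch, the two shaders could look like this, written here as C# string constants so they can be passed straight to the Shader wrapper above. The input names vPosition and vColor and the uniform name projectionMatrix are my own choices and simply have to match the attribute and uniform setup further below.

```csharp
static class ExampleShaders
{
    // vertex shader: transforms each vertex by the projection matrix and passes its colour on
    public const string VertexSource = @"
#version 150

uniform mat4 projectionMatrix;

in vec3 vPosition;
in vec4 vColor;

out vec4 fColor;

void main()
{
    gl_Position = projectionMatrix * vec4(vPosition, 1.0);
    fColor = vColor;
}
";

    // fragment shader: writes the interpolated vertex colour to the output pixel
    public const string FragmentSource = @"
#version 150

in vec4 fColor;

out vec4 fragColor;

void main()
{
    fragColor = fColor;
}
";
}
```

Compiling and linking them then comes down to creating two Shader instances (one with ShaderType.VertexShader, one with ShaderType.FragmentShader) and passing both to the ShaderProgram constructor.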

Vertex Attributes and Vertex Array Objects

In principle we are almost done. However, when we pass our array of vertices to OpenGL, we will do so as a block of bytes, and it will not know how to interpret the data correctly.

To give it that information, specifically what attributes our vertices have and how they are laid out in memory, we will use a vertex array object (VAO).

VAOs are very different from vertex buffers. Despite their name, they do not store vertices. Instead they store information about how to access an array of vertices, which is exactly what we want.

Setting up a VAO is done as follows:
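
A sketch of that setup, again wrapped in a small class of my own design:

```csharp
using OpenTK.Graphics.OpenGL;

sealed class VertexArray<TVertex>
    where TVertex : struct
{
    private readonly int handle;

    public VertexArray(VertexBuffer<TVertex> vertexBuffer, ShaderProgram program,
        params VertexAttribute[] attributes)
    {
        // create the vertex array object
        this.handle = GL.GenVertexArray();

        // bind it, and the vertex buffer it describes, so we can modify them
        this.Bind();
        vertexBuffer.Bind();

        // set all the vertex attributes
        foreach (var attribute in attributes)
            attribute.Set(program);

        // unbind both objects to reset the GL state
        GL.BindVertexArray(0);
        GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
    }

    public void Bind()
    {
        // make this the active vertex array object
        GL.BindVertexArray(this.handle);
    }
}
```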

And the VertexAttribute type used is a simple container like this:
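
Something like the following, holding exactly the arguments that glVertexAttribPointer expects, plus the attribute’s name so we can look up its location (again a sketch, not necessarily the original code):

```csharp
using OpenTK.Graphics.OpenGL;

sealed class VertexAttribute
{
    private readonly string name;
    private readonly int size;
    private readonly VertexAttribPointerType type;
    private readonly bool normalize;
    private readonly int stride;
    private readonly int offset;

    public VertexAttribute(string name, int size, VertexAttribPointerType type,
        int stride, int offset, bool normalize = false)
    {
        this.name = name;
        this.size = size;
        this.type = type;
        this.stride = stride;
        this.offset = offset;
        this.normalize = normalize;
    }

    public void Set(ShaderProgram program)
    {
        // look up the attribute's location in the shader program by name
        int index = program.GetAttributeLocation(this.name);

        // enable the attribute and describe its memory layout
        GL.EnableVertexAttribArray(index);
        GL.VertexAttribPointer(index, this.size, this.type,
            this.normalize, this.stride, this.offset);
    }
}
```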

For information on how to define vertex attributes, I refer to the documentation of glVertexAttribPointer.

As an example, this is how the definition looks for our vertex:
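
Using the types sketched above, and the attribute names from the example vertex shader, this could look as follows; the offsets assume the position comes first (byte 0), followed by the colour (byte 12):

```csharp
var attributes = new[]
{
    // position: 3 floats, starting at byte offset 0 of each vertex
    new VertexAttribute("vPosition", 3, VertexAttribPointerType.Float,
        ColouredVertex.Size, 0),

    // colour: 4 floats, starting right after the position, at byte offset 12
    new VertexAttribute("vColor", 4, VertexAttribPointerType.Float,
        ColouredVertex.Size, 12)
};

// tie the vertex buffer, shader program, and attribute layout together in a VAO
var vertexArray = new VertexArray<ColouredVertex>(vertexBuffer, shaderProgram, attributes);
```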

Defining this data – especially since a lot of it is redundant or can be inferred in most cases – can be largely abstracted and simplified (or even done completely automatically).

For an idea of how that can be achieved, feel free to check this helper class I wrote for my own library.

Shader Uniforms

Something else we have to do before we can put everything together is to set the value of the projection matrix.

Parameters like matrices, textures, and other user-defined values that remain constant for the execution of a shader are stored in uniforms, which can in fact be treated like constants within shader code.

To avoid unpredictable or nonsensical behaviour, they should always be set before rendering as follows:
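
With the wrappers sketched above, setting the projection matrix could look like this (the perspective parameters are arbitrary example values):

```csharp
// an arbitrary example projection matrix
var projectionMatrix = Matrix4.CreatePerspectiveFieldOfView(
    MathHelper.PiOver2, 16f / 9f, 0.1f, 100f);

// the program has to be active before we can set its uniforms
shaderProgram.Use();

// look up the uniform by name and upload the matrix
int projectionLocation = shaderProgram.GetUniformLocation("projectionMatrix");
GL.UniformMatrix4(projectionLocation, false, ref projectionMatrix);
```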

Attribute and Uniform Locations

Both attributes and uniforms have to be assigned locations, which are specified by integers. These locations can be specified in the shader’s source itself.

They can, however, also be looked up by name, which I consider the better solution, since detecting a wrong attribute name is much easier than finding inconsistencies between location assignments in the shader and the C# source.

Accessing these locations is easy and can be done by adding the following two methods to our shader program type:
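
Sketched out, they are little more than pass-throughs to the corresponding GL calls:

```csharp
public int GetAttributeLocation(string name)
{
    // returns the attribute's location in this program, or -1 if it was not found
    return GL.GetAttribLocation(this.Handle, name);
}

public int GetUniformLocation(string name)
{
    // returns the uniform's location in this program, or -1 if it was not found
    return GL.GetUniformLocation(this.Handle, name);
}
```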

Note that these GL calls return -1 if a location was not found, which you may want to check for in a production setting. Further, it may be advantageous to keep a local copy of all known locations for easier and faster access.



10 comments

  1. Thanks a lot for writing this! It’s a really nice post and clearly explains the basics of OpenGL. I really want to do some stuff in OpenGL – mainly because XNA feels kinda quirky sometimes – and this is a great kickstart.

    As to other subjects, one thing I am particularly interested in would be GPU particles, and since you used them in RF (pretty extensively to say the least) I figure you’d know enough about them to do a post about it ;)

    Thanks, and keep the good stuff coming!
    Luca

    • Paul Scharf says:

      Thanks Luca!

      I will definitely be posting about those particles and lots of other topics.

      There is already a rough write-up regarding the particles on the Roche Fusion devlog: http://rochefusion.com/devlog/239/from-thousands-to-millions-of-particles

      But I will definitely do a more low-level post with an example in the future!

      Thanks again, and be sure to let me know how it goes if you look into OpenGL.

      • Yay triangles! :D http://i.imgur.com/IhaM4mR.png
        I figured it’d be best to simply follow the tutorial step by step, and that went quite well I think.

        I did find myself looking for methods that weren’t defined yet though. For example with the ShaderProgram, you add the GetAttributeLocation and GetUniformLocation methods after you use them. Not too big a deal however, and I figured it out quite quickly ;) It also really helped to have the complete example project and your library on GitHub for reference or looking something up. It’s nice that by following this you not only teach the basics but create a simple framework while doing so, which is minimalistic yet really useful.

        It is interesting how much more freedom you have in comparison to XNA :O even though that also means you have to do a bit more work. However, once you have a basic framework (like this one) set up, that helps a lot. It takes quite some getting used to, but I’ll probably spend most of my upcoming spare time on OpenGL now ^^.

        You got me hooked :) Thanks again!

        PS: I also read the particle post, really interesting stuff! And the GPU particle system doesn’t look as complicated as I feared :P Still, a full post or even tutorial on that would be cool :)

  2. ozzyM says:

    Perfect. I have been looking for OpenGL + C# + GLSL for many months, and finally found it. Thank you. I will just follow each and every step. Is my #version 140 ok? I haven’t tried your code so far, but I will. I hope you will keep building up from the basics with the same good explanations. I have just started learning this, and I hope you will stay with us. Thank you.

    • Paul Scharf says:

      Hey,

      I’m glad to hear this post is useful for you!

      #version 140 is for GLSL version 1.4, which comes with OpenGL 3.1 (very confusing version combinations!).
      That should work fine for most things, but there is very little hardware that supports exactly OpenGL 3.1.
      Older (very old, really) hardware tends to go up to 2.1, and almost everything else supports at least 3.2 or newer.

      So I usually use #version 150, which comes with OpenGL 3.2.
      That also works better on non-Windows platforms (especially Mac OS X), which can be more picky about what OpenGL version you try to use.

      Hope that answers your question.

      In either case, let me know if you have any other questions! :)

  3. You don’t seem to talk about the potential pitfalls or challenges related to determining the size of a buffer and the offsets of data in it. You have the size of a ColouredVertex hard-coded at (3+4)*4, but doesn’t this assume that the compiler is aligning members of the struct in a particular way? If you attempt to use sizeof(ColouredVertex) you’ll get a compiler error that talks about why the size of this structure may not be constant. Isn’t this a potential problem?

    • Paul Scharf says:

      Hey Benjamin,

      You’re right that getting the struct size right is very important. If something goes wrong with that, you get lots of crashes in the worst case, and garbled rendering in the best case.

      I did not go over that here to keep things relatively brief, though.

      As you say, there is no guarantee that the compiler will actually use what seems like the most obvious memory layout, unless you force it to do so using StructLayoutAttribute (though .NET seems to do so in most cases, while Mono is somewhat less predictable). For example, [StructLayout(LayoutKind.Sequential, Pack = 1)] will always result in the ‘obvious’ memory layout.

      Regarding the size, we also do not have to rely on hardcoding. While sizeof() indeed does not work for structs, we can use System.Runtime.InteropServices.Marshal.SizeOf(typeof(TVertex)) to get the size at runtime.
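
      Just as a rough sketch of both ideas (the field names here are only for illustration):

      ```csharp
      using System.Runtime.InteropServices;
      using OpenTK;
      using OpenTK.Graphics;

      // force the 'obvious' sequential layout with no padding between fields
      [StructLayout(LayoutKind.Sequential, Pack = 1)]
      struct ColouredVertex
      {
          public Vector3 Position;
          public Color4 Color;
      }

      static class VertexInfo
      {
          // determine the vertex size in bytes at runtime instead of hardcoding (3 + 4) * 4
          public static readonly int Size = Marshal.SizeOf(typeof(ColouredVertex));
      }
      ```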

      So as you rightly say, there are a bunch of things here we can do wrong, but these are by no means problems we cannot work around (and relatively easily so).

      Hope that answers your questions! :)

  4. Dapo Olaleye says:

    I’m a little lost as to how I could use any of this to generate a texture or sprite. Heck, even a movable object (Vector).

    Though, I only say that because I am very new to C# and OpenTK. I moved from ActionScript 2 to AS3 to Haxe, and now I’m trying to go for the “real” programming languages, and I’m kinda lost here and there when it comes to just making a movable object (like a movie clip with some texture or what-not).

    Sorry, guess I’m just really a noob when it comes to things way outside my comfort zone, and this is the first post I’ve seen on this website during my quest to find an OpenTK tutorial that doesn’t rely on immediate mode (since I heard it’s, for all intents and purposes, slower, limiting and basically bad in a sense, and I didn’t want to rely on Unity for… various reasons). But isn’t there at least some way to translate all this into generating a sort of content pipeline for vector and bitmap graphics, or any form of texture and object other than just triangles (which apparently seems to be the most popular shape to make in most tutorials I’ve seen)?

    Sorry if I’m rambling on, just trying to figure out how to basically generate a texture from a png or jpg image, and a movable and rotatable object, with all this, since I’m really new to OpenTK :)