Hi there! I’m Jack and I exist exclusively here.
This walkdown (somewhere between a walkthrough and a rundown) is about how I handled loading textures to the GPU using Vulkan in my Rust game engine from scratch!
If you’ve never done any graphics or Rust before, have no fear, but also, don’t read this article. If you would like to get into Vulkan/game engine programming and like Rust, check out this excellent guide and then come back after you’ve drawn a textured quad. Otherwise, follow me!
Here’s the Repo with all of the code we’re going to write today put together. This Repo isn’t even close to a complete Vulkan pipeline, and it isn’t sufficient as a library either, so just use it for education. The optimal reader of this article just googled “drawing textures gfx_hal rust” and found this article. If you’re that person, you’re in the right place!
Everything we need in an “image” we’re going to slap into a single struct. We won’t use all these fields here, but the reason to keep them bundled is simple — we don’t want Rust to automatically drop any of them, so we need to haul them around in ManuallyDrop wrappers, which allow us to free this memory ourselves, since these fields really represent assets on the GPU. I call this “loaded image”… LoadedImage. It's also a badass name 😎.
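The struct itself lives in the repo; here's a compilable sketch of the shape it might take, with placeholder types standing in for the real gfx_hal backend types (field names are my guesses):

```rust
use std::mem::ManuallyDrop;

// Placeholders standing in for the real gfx_hal backend types
// (B::Image, B::Memory, B::ImageView, B::Sampler).
struct Image;
struct Memory;
struct ImageView;
struct Sampler;
struct Requirements {
    size: u64,
    alignment: u64,
}

// A sketch of LoadedImage: every GPU-side asset sits in a ManuallyDrop
// so Rust won't free any of it behind our backs.
pub struct LoadedImage {
    image: ManuallyDrop<Image>,
    requirements: Requirements,
    memory: ManuallyDrop<Memory>,
    image_view: ManuallyDrop<ImageView>,
    sampler: ManuallyDrop<Sampler>,
}

fn main() {
    let loaded = LoadedImage {
        image: ManuallyDrop::new(Image),
        requirements: Requirements { size: 16_384, alignment: 256 },
        memory: ManuallyDrop::new(Memory),
        image_view: ManuallyDrop::new(ImageView),
        sampler: ManuallyDrop::new(Sampler),
    };
    // When `loaded` goes out of scope, none of the ManuallyDrop fields
    // run their destructors; freeing them is entirely on us.
    assert_eq!(loaded.requirements.alignment, 256);
}
```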
First things first, let’s be good C-citizens (when we’re this unsafe in Rust, we're not far from just writing C) and add our destructor:
It’s difficult to force the compiler to force us to use the manually_drop method when we make an image -- normally that's a thing Rust handles easily, but we've basically "turned that off" by using ManuallyDrop, so I guess we'll just have to use our dumb brains to remember to do it (fun fact -- while writing this article, I forgot to include the section about dropping some memory, ironically showing why this kind of memory management can be error prone).
manual_drop, by the way, is a simple convenience macro because I got tired of writing this all the time:
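The real macro is in the repo; here's a sketch of what a macro like this might look like (the ptr::read moves the value out of the ManuallyDrop so its normal destructor can run; the names here are my own):

```rust
use std::mem::ManuallyDrop;
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPS: AtomicUsize = AtomicUsize::new(0);

// Stand-in for a GPU asset; it counts how many times it gets dropped.
struct GpuAsset;
impl Drop for GpuAsset {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Move the value out of a ManuallyDrop so its Drop impl actually runs.
// Only sound if the ManuallyDrop is never touched again afterwards!
macro_rules! manual_drop {
    ($val:expr) => {
        ManuallyDrop::into_inner(core::ptr::read(&$val))
    };
}

fn main() {
    let asset = ManuallyDrop::new(GpuAsset);
    // Without this line, GpuAsset's destructor would never run:
    unsafe { drop(manual_drop!(asset)) };
    assert_eq!(DROPS.load(Ordering::SeqCst), 1);
}
```

In the destructor, this same pattern gets applied to each field before the underlying GPU handles are freed.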
Okay, now we’re onto the good bits! Let’s walk through what actually making an image looks like.
Notice that we pass in our filter. We only have two options as far as I can tell -- Linear and Nearest. Simple choice, really -- if you're doing pixel art, do Nearest; otherwise, do Linear. In your game, you might feel free to hardcode this.
There are two other things of note to point out here: width and height. These refer to the texel size of the image. If you've never heard the term texel, bless your heart, because it's a terrible word. A texel is to a texture like a pixel is to a... picture... which is what a texture is... oh no!
Okay, so what is a texture? For us, we're using a very simple definition (mipmaps complicate this!): a texture is a 2D grid of colors, and a color is 4 u8s in a row, forming an RGBA image (a u8 can represent 256 numbers, which is why each color channel goes from 0 to 255!).
Here’s an example of a texture written out…
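For instance, a tiny 2×2 texture, written out as raw u8 data (the colors here are my own made-up example), looks like this:

```rust
fn main() {
    // A 2x2 RGBA texture: each texel is 4 u8s (red, green, blue, alpha).
    let texture: [u8; 16] = [
        255, 0, 0, 255,     0, 255, 0, 255,     // row 0: red, green
        0, 0, 255, 255,     255, 255, 255, 255, // row 1: blue, white
    ];
    let width = 2;
    // The texel at (x, y) starts at byte (y * width + x) * 4:
    let (x, y) = (1, 1);
    let start = (y * width + x) * 4;
    assert_eq!(&texture[start..start + 4], &[255, 255, 255, 255]); // white
}
```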
And here it is in picture form:
So when we say the width or height of a texture, we're really asking about the dimensions of this grid.
Making the Actual Image Object
This whole section is largely boilerplate, but let’s run through it quickly.
First, we say, “Hey, GPU, make me an image please” and it says “sure, here ya go”:
We’ll also need to find the requirements for how much memory the GPU is going to need. This is ultimately up to the GPU to tell us, since GPUs might pad memory differently, but it’s going to be in the ballpark of width * height * 4, which reflects the u8s we wrote out above.
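As a sketch of that ballpark math (the 256-byte alignment here is made up; the real value comes back from the GPU in the requirements):

```rust
fn main() {
    let (width, height) = (10u64, 10u64);
    // 4 u8s per texel: R, G, B, A.
    let unpadded_size = width * height * 4;
    // Pretend the GPU asked for 256-byte alignment; round the size up
    // to the next multiple of it.
    let alignment = 256u64;
    let padded_size = (unpadded_size + alignment - 1) / alignment * alignment;
    assert_eq!(unpadded_size, 400);
    assert_eq!(padded_size, 512);
}
```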
Next, we’re going to get that memory requirement, ask the GPU to allocate that memory, and then bind that memory to our image object. I’m not exactly sure what bind means in this Vulkan context for the GPU, but I assume this is essentially giving our image object on the GPU side a pointer to its memory. The code to do that looks like this:
Next, we make our image_view and our sampler. It’s difficult for me to get into too much detail, as these things get bound to your descriptor_sets, which you allocate from the DescriptorPool you'll create alongside your PipelineLayout, but for me, a simple 2D man with a simple 2D game, it looks like this:
And finally, we create our LoadedImage like this:
Okay! So now we have a LoadedImage. You'll notice we bound it to a mut texture before we returned it out of its constructor, and that's because we're not done yet. It's time to actually edit the image so it looks like what we want.
To edit any image, we need to create a buffer, which we’ll fill with our colors, turning it into a flat representation of that grid which we wrote out above, and then we need to put that buffer in our pipeline to send into our image!
Create our Staging Buffer
First, we’re going to need to do some pointer funtime math! Here’s what we’re going to need to do:

What's this BufferBundle::new function? It's exactly like how we made an image object, just slightly tweaked to be about buffers instead of images. BufferBundle looks like this, just to keep it all out there:
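The real definition is in the repo; here's a compilable sketch of the shape it plausibly takes, again with placeholder types for the backend handles and field names of my own choosing:

```rust
use std::mem::ManuallyDrop;

// Placeholders standing in for the real backend types (B::Buffer, B::Memory).
struct Buffer;
struct Memory;

// A sketch of BufferBundle: a buffer handle, its backing memory, and how
// big the allocation is, all wrapped so nothing drops automatically.
pub struct BufferBundle {
    buffer: ManuallyDrop<Buffer>,
    memory: ManuallyDrop<Memory>,
    size: u64,
}

fn main() {
    let bundle = BufferBundle {
        buffer: ManuallyDrop::new(Buffer),
        memory: ManuallyDrop::new(Memory),
        size: 512,
    };
    assert_eq!(bundle.size, 512);
}
```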
It also has a manually_drop method, like the LoadedImage before it:
Now here’s the real meat of the problem — we need to write the stream of image data we have to the buffer. This code is dense, so read over it a few times for clarification. For me, grabbing a piece of paper and working through it myself gave me a good feel, but basically, we’re trying to convert a flat array to a grid, copying one row at a time to the GPU. When we send it to the GPU, we’ll tell it how long each row is, which the GPU will use to re-assemble the grid later.
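Here's a self-contained sketch of that row-by-row copy, using a plain Vec in place of the mapped GPU memory (the row pitch of 16 is a made-up alignment requirement):

```rust
fn main() {
    let (width, height) = (2usize, 2usize);
    let texel_size = 4; // RGBA: 4 u8s per texel
    // Our flat image data: 16 bytes, standing in for real pixel data.
    let image: Vec<u8> = (0..(width * height * texel_size) as u8).collect();
    // The GPU dictates a "row pitch": how many bytes apart the start of
    // each row must be in the buffer, often padded past width * texel_size.
    let row_pitch = 16usize;
    let mut staging = vec![0u8; row_pitch * height];
    for y in 0..height {
        let row = &image[y * width * texel_size..(y + 1) * width * texel_size];
        staging[y * row_pitch..y * row_pitch + row.len()].copy_from_slice(row);
    }
    // Row 0 lands at offset 0, row 1 at offset row_pitch, padding between.
    assert_eq!(&staging[0..8], &image[0..8]);
    assert_eq!(&staging[16..24], &image[8..16]);
}
```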
And with that, our staging_buffer is good to go! We need one last piece of data, and that's simple:
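I'd guess that last u32 is the buffer's row width measured in texels, since the buffer-to-image copy later needs to know it; a sketch of that arithmetic, with a made-up row pitch:

```rust
fn main() {
    // Hypothetical values: the GPU reported a 16-byte row pitch for a
    // texture whose texels are 4 bytes (RGBA8) each.
    let row_pitch_in_bytes = 16u32;
    let texel_size_in_bytes = 4u32;
    // The copy command wants the buffer's row width in texels:
    let row_width_in_texels = row_pitch_in_bytes / texel_size_in_bytes;
    assert_eq!(row_width_in_texels, 4);
}
```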
I have this all bound in as a function which returns a tuple of (BufferBundle, u32), which is good enough. See the linked repository for more.
Uploading Our Buffer to the GPU
Okay, so when you want to upload data to the GPU, you need two things:
- The data you want to operate on in some sort of buffer. We just made ours when we made our “staging buffer” and prepared it with our image data.
- A “command buffer” which is just another buffer that you upload to the GPU which has references to the buffer(s) you want to operate on, and…well…commands to the GPU, as to what to do with those buffers.
To make our command buffer, we ask the GPU for one out of our CommandPool, which we make in our Pipeline creation (see the learn gfx_hal tutorials above for that!):
Our image is in some undefined state right now (as in, I personally don't know what state it's in!), so we'll need to transition it to a state where we can write to it. We do this with a barrier, and we create one like this:
Next, we do what we actually want to be doing here, which is copying the buffer over! We do it like this:
Important note here: if you instead want to make a dynamic texture (which I may cover in a brief addendum in the future), where you edit a part of a texture after creating it, you can easily do that by making the width and height only a section of the image, and then specifying some offset into the image. You can also just re-edit the entire texture at once, but that's awfully wasteful!
Now, we need to transition our image to the access state of SHADER_READ and the layout of ShaderReadOnlyOptimal. We do that with... you guessed it, another barrier, like so:
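This is not gfx_hal code, just a toy model of the layout journey our image takes across the two barriers, to make the sequence concrete (the enum mirrors gfx_hal's Layout variant names but is purely illustrative):

```rust
// The three layouts our image passes through during the upload.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Layout {
    Undefined,
    TransferDstOptimal,
    ShaderReadOnlyOptimal,
}

fn main() {
    let mut layout = Layout::Undefined;
    assert_eq!(layout, Layout::Undefined);

    // Barrier 1: make the image a valid transfer destination.
    layout = Layout::TransferDstOptimal;
    // ...the buffer-to-image copy happens here...
    assert_eq!(layout, Layout::TransferDstOptimal);

    // Barrier 2: make the image readable from shaders.
    layout = Layout::ShaderReadOnlyOptimal;
    assert_eq!(layout, Layout::ShaderReadOnlyOptimal);
}
```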
And now we’re done adding to our cmd_buffer. We'll have to submit it to the GPU to actually run all those commands, but before we do that, we make a fence. For those who don't know, a fence, in Vulkan speak, is similar to a semaphore, but a fence synchronizes the GPU with the CPU, while a semaphore synchronizes work within the GPU itself. (Check the Vulkan docs for a better explanation of the difference -- in practice, sometimes Vulkan wants a fence, sometimes it wants a semaphore. I just do what the specs tell me to do.)
As always, we need to do our cleanup here too! First, we wait on our fence to make sure the GPU has actually finished executing our command buffer, and then we free the command buffer and destroy the fence. Afterwards, we clean up everything else.
And, with that, we are done!
Let’s take a step back and see how this code looks in our wider program.
I made a wrapper function called register_texture, which requires my RendererComponent, which is where my device lives, and an RgbaImage, a struct provided by the image crate. In the repository with all this code, I’ve just mocked this up, because otherwise you’d have to look at all five thousand or so lines of Vulkan rendering code, and I don’t think anyone wants that.
The function looks like this:
That looks pretty good to me!
Thanks so much for joining me on this walkdown through loading a texture in Vulkan using gfx_hal. I hope this has been useful to you!
You can always find me here where I exist perpetually.