
Rendering to Textures with Framebuffers

Colorful gradient on the left and a grayscale version on the right, demonstrating a post-processing effect.


In this article, we'll dive into rendering to textures with framebuffers — the foundational technique for post-processing effects. The canvas for these effects is often a simple, full-screen quad. We'll use the one we created in the previous article. While the implementation works, it contains some redundant data and can be optimized. Let's address the optimization and then move on to rendering to textures.

The Problem with Duplicate Vertices

In the previous version, the full-screen quad was defined like this:

javascript
const positions = new Float32Array([
  -1, -1, // Bottom left
   1, -1, // Bottom right
  -1,  1, // Top left
  -1,  1, // Top left (duplicate!)
   1, -1, // Bottom right (duplicate!)
   1,  1, // Top right
]);

This array contains 6 vertices (12 floating-point values; we'll refer to them as floats). The vertex at index 3 is a duplicate of the one at index 2, and the vertex at index 4 duplicates the one at index 1.

The Solution: Index Buffers

By using an index buffer (aka an element array buffer), redundant vertex data can be eliminated. An index buffer holds integer references into the vertex array, allowing each unique vertex to be stored exactly once while still defining arbitrary primitives. The index buffer acts as a set of instructions that tells the GPU: To draw the first triangle, connect vertices 0, 1, and 2; for the second triangle, connect vertices 2, 1, and 3. This approach allows the GPU to reuse vertex data efficiently.

javascript
// Only 4 unique vertices (8 floats)
const positions = new Float32Array([
  -1, -1, // 0: Bottom left
   1, -1, // 1: Bottom right
  -1,  1, // 2: Top left
   1,  1, // 3: Top right
]);

// Index buffer: each group of three indices defines one triangle
const indices = new Uint16Array([
  0, 1, 2, // Triangle A: vertices 0 → 1 → 2
  2, 1, 3, // Triangle B: vertices 2 → 1 → 3
]);

Visual Representation

WebGL quad decomposed into two triangles with shared vertices for indexed drawing

The diagram shows how the quad is split into two triangles: the blue triangle connects vertices 0, 1, and 2, and the green triangle uses vertices 2, 1, and 3. Both triangles share the diagonal edge (the dashed line) and reuse vertices 1 and 2. This vertex reuse is the key to indexed drawing's efficiency.

Indexed drawing cuts the vertex data from 12 floats to 8 (a 33% reduction), and even after accounting for the six 16-bit indices, less data is transferred from the CPU to the GPU. Shared vertices can also be cached and reused by the GPU's vertex processing pipeline, avoiding redundant transformations. For a simple quad, the performance impact is negligible, but for complex meshes, where vertex sharing is common, index buffers significantly reduce memory footprint and improve rendering performance.

Implementation

To use indexed drawing, we need to create an additional buffer to store the indices. This buffer tells WebGL which vertices to use for each triangle. The process involves binding it as an element array buffer, uploading the index data, and then calling drawElements with the appropriate parameters:

javascript
const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

gl.drawElements(
  gl.TRIANGLES,      // Primitive type
  6,                 // Number of indices to draw
  gl.UNSIGNED_SHORT, // Data type of indices (16-bit)
  0                  // Byte offset in the index buffer
);

With indexed drawing in place, our full-screen quad now uses only the necessary vertex data without redundancy.
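
For reference, here's a minimal sketch of what an updated createFullScreenQuad helper might look like. The helper name comes from the previous article, but the exact buffer setup shown here is an assumption, not the article's verbatim code:

javascript
function createFullScreenQuad(gl) {
  const vao = gl.createVertexArray();
  gl.bindVertexArray(vao);

  // 4 unique vertices instead of 6
  const positions = new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]);
  const positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  // Matches `layout(location = 0) in vec2 aPosition` in the vertex shader
  gl.enableVertexAttribArray(0);
  gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

  // The ELEMENT_ARRAY_BUFFER binding is recorded in the VAO itself,
  // so it must be bound while the VAO is bound
  const indices = new Uint16Array([0, 1, 2, 2, 1, 3]);
  const indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  gl.bindVertexArray(null);
  return vao;
}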

When to Use Indexed Drawing

  • Always for meshes with shared vertices (most 3D models)
  • Sometimes for 2D shapes (like our quad)
  • Rarely for particle systems or other cases where vertices are unique

Using indexed drawing is essential for complex geometry. It improves efficiency, but doesn't change the core limitation: rendering still targets the screen directly.

Off-Screen Rendering

Rendering directly to the screen limits us to single-pass effects. Effects like blur require reading from previously rendered pixels while generating new ones. If the scene renders directly to the screen, those pixels can't be re-read to perform the blur calculation in the same pass. To implement blur, we need to first render the scene to an intermediate texture we can read from — this technique is called off-screen rendering.

This is the core idea behind almost every modern visual effect, from blur to real-time fluid simulation. In WebGL, off-screen rendering is achieved with Framebuffer Objects (FBOs), which let us render to textures instead of the screen and chain multiple rendering passes together.

By rendering off-screen, we change the workflow:

  • Pass 1: Render the entire 3D scene, not to the screen, but to a texture.
  • Pass 2: Render a simple full-screen quad to the screen. In its fragment shader, you can now "read" from the texture generated in Pass 1, sample it multiple times, and average the results to create a blur.

This multi-pass approach is essential for many GPGPU (General-Purpose computing on GPUs) applications, including the fluid simulation we're building toward.

What is a Framebuffer Object (FBO)?

A Framebuffer Object is a WebGL object that serves as an alternative rendering destination. By default, WebGL draws to the canvas's default framebuffer, which displays directly in the browser. An FBO instead holds references to textures and renderbuffers, so rendering operations write into those attachments rather than onto the screen.

A framebuffer consists of attachment points that receive different outputs from the rendering pipeline. The primary attachment is COLOR_ATTACHMENT0, which receives the RGBA color output from fragment shaders. Additional attachments can include depth buffers, stencil buffers, and multiple color attachments for advanced techniques.
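
For example, if a pass also needs depth testing, a depth renderbuffer can be attached alongside the color texture. Here's a minimal sketch, assuming gl, framebuffer, width, and height are already defined (the renderbuffer must match the color attachment's dimensions):

javascript
const depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferRenderbuffer(
  gl.FRAMEBUFFER,      // Target
  gl.DEPTH_ATTACHMENT, // Attachment point for depth output
  gl.RENDERBUFFER,     // Renderbuffer target
  depthBuffer          // The renderbuffer to attach
);

We won't need a depth attachment for our full-screen effects, so the code below sticks to a single color attachment.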

Creating a Framebuffer

Creating a working FBO involves a few steps: creating the destination texture, creating the framebuffer itself, and attaching the two together:

javascript
/**
 * Creates a texture and a framebuffer to render into it.
 * @param {WebGL2RenderingContext} gl The WebGL2 context.
 * @returns {{texture: WebGLTexture, framebuffer: WebGLFramebuffer}}
 */
function createFramebuffer(gl) {
  // 1. Create the texture to render into
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  // Allocate storage for the texture. We'll resize it later.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  // 2. Create the framebuffer
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

  // 3. Attach the texture to the framebuffer's color attachment point
  gl.framebufferTexture2D(
    gl.FRAMEBUFFER,       // Target
    gl.COLOR_ATTACHMENT0, // Attachment point
    gl.TEXTURE_2D,        // Texture target
    texture,              // The texture to attach
    0                     // Mipmap level
  );

  // 4. Check if the framebuffer is complete
  const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
  if (status !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error(`Framebuffer is not complete: ${status}`);
  }

  // Unbind to be tidy
  gl.bindTexture(gl.TEXTURE_2D, null);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { texture, framebuffer };
}

The key operation here is gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer), which redirects all subsequent rendering commands to the FBO instead of the screen. The texture attachment becomes our rendering target, effectively creating a virtual canvas. The framebuffer completeness check ensures everything is properly configured — a framebuffer can be incomplete if attachments have mismatched dimensions or unsupported formats. Finally, binding null to the framebuffer returns rendering to the default framebuffer (the screen). This is important because once you've bound a custom framebuffer, all rendering continues to go to its texture until you explicitly unbind it.
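
Since forgetting to unbind is a common source of bugs, one convenient pattern is to wrap each pass in a small helper. The function below is a hypothetical sketch, not something the rest of this article depends on:

javascript
// Hypothetical helper: guarantees the framebuffer is unbound after the pass
function renderToFramebuffer(gl, framebuffer, drawPass) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer); // Redirect rendering
  drawPass();                                      // Issue the draw calls
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);        // Back to the screen
}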

Example: A Two-Pass Grayscale Effect

Let's apply these concepts by implementing a grayscale post-processing effect. We will perform two rendering passes:

  • Pass 1: Draw our colorful UV gradient from the last article into our off-screen FBO.
  • Pass 2: Draw a full-screen quad to the canvas, but use the texture from Pass 1 as an input and convert it to grayscale.

Shaders

We need two sets of shaders. The first renders our "scene" (the UV gradient), and the second applies the post-processing effect.

glsl
// Renders our initial scene. Same as the previous article.
const sceneVertSrc = `#version 300 es
layout(location = 0) in vec2 aPosition;
out vec2 vUV;
void main() {
  vUV = aPosition * 0.5 + 0.5;
  gl_Position = vec4(aPosition, 0.0, 1.0);
}`;

const sceneFragSrc = `#version 300 es
precision highp float;
in vec2 vUV;
out vec4 outColor;
void main() {
  outColor = vec4(vUV, 0.5, 1.0); // Colorful gradient
}`;

// Applies the grayscale effect by reading from a texture.
const postFxVertSrc = sceneVertSrc; // We can reuse the same vertex shader

const postFxFragSrc = `#version 300 es
precision highp float;
in vec2 vUV;
out vec4 outColor;
uniform sampler2D uSceneTexture; // Our off-screen texture

void main() {
  vec3 sceneColor = texture(uSceneTexture, vUV).rgb;
  // Simple grayscale conversion using luminance formula
  float grayscale = dot(sceneColor, vec3(0.299, 0.587, 0.114));
  outColor = vec4(vec3(grayscale), 1.0);
}`;

JavaScript Render Loop

The main logic happens in our drawing function. We need to set up both shader programs and the FBO, then orchestrate the two rendering passes.

javascript
// --- In your setup code ---
const sceneProgram = createProgram(gl, sceneVertSrc, sceneFragSrc);
const postFxProgram = createProgram(gl, postFxVertSrc, postFxFragSrc);

const { texture: sceneTexture, framebuffer } = createFramebuffer(gl);

// createFullScreenQuad from the previous article, now optimized with indexed drawing
const quadVAO = createFullScreenQuad(gl);

// --- In your render loop ---
function render() {
  // Check if we need to resize the canvas and our framebuffer texture
  if (gl.canvas.width !== gl.canvas.clientWidth || gl.canvas.height !== gl.canvas.clientHeight) {
    gl.canvas.width = gl.canvas.clientWidth;
    gl.canvas.height = gl.canvas.clientHeight;

    // Resize the texture's storage
    gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.canvas.width, gl.canvas.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  }

  // --- PASS 1: Render scene to the framebuffer ---

  // Bind the FBO as the render target
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

  // Set the viewport to the texture's size
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Render the scene
  gl.useProgram(sceneProgram);
  gl.bindVertexArray(quadVAO);
  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  // --- PASS 2: Render to the screen with a post-processing effect ---

  // Unbind the FBO to render to the canvas
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  // Set the viewport to the canvas's size
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Use the post-fx shader and provide the scene texture
  gl.useProgram(postFxProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
  gl.uniform1i(gl.getUniformLocation(postFxProgram, 'uSceneTexture'), 0);

  // Render the quad
  gl.bindVertexArray(quadVAO);
  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  requestAnimationFrame(render);
}

render();

Running this code won't show a colorful gradient. Instead, you'll see its grayscale version — proof that we successfully rendered to a texture in the first pass, then read from that texture in the second pass to apply our post-processing effect.

You can explore the full example on GitHub and see a live demo here. The demo includes a split-screen comparison to help visualize the effect in action.

How would a blur effect work?

The grayscale shader reads from the scene texture once for every pixel. A blur works on the same principle, but for each pixel we would sample uSceneTexture multiple times: once at the pixel's own location, and several more times in a small radius around it. We would then average all those color samples together. This averaging produces a box blur, the simplest blur technique, in which every sample is weighted equally.
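
As an illustration (this shader is not part of the article's demo), a minimal box-blur fragment shader could look like the following sketch, where uTexelSize is an assumed uniform holding 1.0 / resolution:

javascript
const boxBlurFragSrc = `#version 300 es
precision highp float;
in vec2 vUV;
out vec4 outColor;
uniform sampler2D uSceneTexture;
uniform vec2 uTexelSize; // Assumed uniform: 1.0 / texture resolution

void main() {
  vec3 sum = vec3(0.0);
  // Sample a 3x3 neighborhood centered on the current pixel
  for (int x = -1; x <= 1; x++) {
    for (int y = -1; y <= 1; y++) {
      sum += texture(uSceneTexture, vUV + vec2(x, y) * uTexelSize).rgb;
    }
  }
  outColor = vec4(sum / 9.0, 1.0); // Average of 9 equally weighted samples
}`;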

We're sticking to the simpler grayscale effect here to keep the focus on the framebuffer setup itself.

Summary

Framebuffer objects are one of the most powerful features of modern graphics APIs. By mastering them, you are no longer limited to a single rendering pass.

  • FBOs act as virtual screens, allowing you to render into textures.
  • Binding an FBO redirects all drawing commands to its attached textures.
  • Binding null switches the rendering target back to the default framebuffer.
  • This multi-pass technique is the foundation for post-processing, deferred rendering, and complex GPGPU simulations.

This gives us an important building block for the fluid simulation effect: we can now save the state of a calculation to a texture. But what if we want a feedback loop in which we continuously read from a texture, compute a new result, and write it back? To do that, we'll need one more trick.

In the next article, we will explore the Ping-Pong Technique to create iterative feedback loops, which will allow us to advect dye and velocity for our fluid simulation.


This is part of my series on implementing interactive 3D visualizations with WebGL 2.
