Rendering to Textures with Framebuffers

Table of Contents
- The Problem with Duplicate Vertices
- Off-Screen Rendering
- Framebuffer Object (FBO)
- Creating a Framebuffer
- Practical Example: A Two-Pass Grayscale Effect
- Summary
In this article, we'll dive into rendering to textures with framebuffers — the foundational technique for post-processing effects. The canvas for these effects is often a simple, full-screen quad. We'll use the one we created in the previous article. While the implementation works, it contains some redundant data and can be optimized. Let's address the optimization and then move on to rendering to textures.
The Problem with Duplicate Vertices
In the previous version, the full-screen quad was defined like this:
```javascript
const positions = new Float32Array([
  -1, -1, // Bottom left
   1, -1, // Bottom right
  -1,  1, // Top left
  -1,  1, // Top left (duplicate!)
   1, -1, // Bottom right (duplicate!)
   1,  1, // Top right
]);
```
This array contains 6 vertices (12 floating-point values; we'll refer to them as floats). The vertex at index 3 is a duplicate of the one at index 2, and the vertex at index 4 duplicates the one at index 1.
The Solution: Index Buffers
By using an index buffer (aka an element array buffer), redundant vertex data can be eliminated. An index buffer holds integer references into the vertex array, allowing each unique vertex to be stored exactly once while still defining arbitrary primitives. The index buffer acts as a set of instructions that tells the GPU: To draw the first triangle, connect vertices 0, 1, and 2; for the second triangle, connect vertices 2, 1, and 3. This approach allows the GPU to reuse vertex data efficiently.
```javascript
// Only 4 unique vertices (8 floats)
const positions = new Float32Array([
  -1, -1, // 0: Bottom left
   1, -1, // 1: Bottom right
  -1,  1, // 2: Top left
   1,  1, // 3: Top right
]);

// Index buffer: each group of three indices defines one triangle
const indices = new Uint16Array([
  0, 1, 2, // Triangle A: vertices 0 → 1 → 2
  2, 1, 3, // Triangle B: vertices 2 → 1 → 3
]);
```
Visual Representation
The diagram shows how the quad is split into two triangles: the blue triangle connects vertices 0, 1, and 2, and the green triangle uses vertices 2, 1, and 3. Both triangles share the diagonal edge (the dashed line) and reuse vertices 1 and 2. This vertex reuse is the key to indexed drawing's efficiency.
Index buffers reduce the vertex data from 12 floats to 8 (a 33% reduction, at the cost of six 2-byte indices) and reduce the amount of data transferred from the CPU to the GPU. Shared vertices can be cached and reused by the GPU's vertex processing pipeline, avoiding redundant transformations. For a simple quad, the performance impact is negligible. For complex meshes where vertex sharing is common, index buffers significantly reduce memory footprint and improve rendering performance.
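To put rough numbers on that, here's a back-of-the-envelope comparison for a 100 × 100 vertex grid, with positions stored as 2 floats per vertex. The figures are illustrative only, not from a real benchmark:

```javascript
// Rough memory comparison for an N x N vertex grid (positions only).
const N = 100;
const quads = (N - 1) * (N - 1);                    // 9,801 quads
// Non-indexed: every quad stores 6 vertices of 2 floats (4 bytes each)
const nonIndexedBytes = quads * 6 * 2 * 4;          // ≈ 470 KB
// Indexed: each vertex stored once, plus 6 Uint16 indices per quad
const indexedBytes = N * N * 2 * 4 + quads * 6 * 2; // ≈ 198 KB
console.log({ nonIndexedBytes, indexedBytes });     // indexed saves ~58%
```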
Implementation
To use indexed drawing, we need to create an additional buffer to store the indices. This buffer tells WebGL which vertices to use for each triangle. The process involves binding it as an element array buffer, uploading the index data, and then calling drawElements with the appropriate parameters:
```javascript
const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

gl.drawElements(
  gl.TRIANGLES,      // Primitive type
  6,                 // Number of indices to draw
  gl.UNSIGNED_SHORT, // Data type of indices (16-bit)
  0                  // Byte offset in the index buffer
);
```
With indexed drawing in place, our full-screen quad now uses only the necessary vertex data without redundancy.
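One practical detail worth noting: in WebGL 2, the ELEMENT_ARRAY_BUFFER binding is stored inside the currently bound Vertex Array Object, not in global state. Assuming the quad's attributes live in a VAO (as they will in the render loop later in this article), a sketch of the setup order looks like this:

```javascript
// Hypothetical setup order; quadVAO and indexBuffer are assumed to exist.
gl.bindVertexArray(quadVAO);
// This binding is recorded in quadVAO...
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
gl.bindVertexArray(null);
// ...so binding quadVAO later automatically restores the index buffer
// before gl.drawElements is called.
```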
When to Use Indexed Drawing
- Always for meshes with shared vertices (most 3D models)
- Sometimes for 2D shapes (like our quad)
- Rarely for particle systems or other cases where vertices are unique
Using indexed drawing is essential for complex geometry. It improves efficiency, but doesn't change the core limitation: rendering still targets the screen directly.
Off-Screen Rendering
Rendering directly to the screen limits us to single-pass effects. Effects like blur require reading from previously rendered pixels while generating new ones. If the scene renders directly to the screen, those pixels can't be re-read to perform the blur calculation in the same pass. To implement blur, we need to first render the scene to an intermediate texture we can read from — this technique is called off-screen rendering.
This is the core idea behind almost every modern visual effect, from blur to real-time fluid simulation. Off-screen rendering solves this limitation by using Framebuffer Objects (FBOs) to render to textures instead of the screen, enabling us to chain multiple rendering passes together.
By rendering off-screen, we change the workflow:
- Pass 1: Render the entire 3D scene, not to the screen, but to a texture.
- Pass 2: Render a simple full-screen quad to the screen. In its fragment shader, you can now "read" from the texture generated in Pass 1, sample it multiple times, and average the results to create a blur.
This multi-pass approach is essential for many GPGPU (General-Purpose computing on GPUs) applications, including the fluid simulation we're building toward.
What is a Framebuffer Object (FBO)?
A Framebuffer Object is a WebGL object that serves as an alternative rendering destination. By default, WebGL draws to the canvas's default framebuffer, which displays directly in the browser. An FBO is an object that holds references to textures and renderbuffers, allowing rendering operations to write to these textures instead of the screen.
A framebuffer consists of attachment points that receive different outputs from the rendering pipeline. The primary attachment is COLOR_ATTACHMENT0, which receives the RGBA color output from fragment shaders. Additional attachments can include depth buffers, stencil buffers, and multiple color attachments for advanced techniques.
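For our 2D post-processing passes a color attachment is all we need, but for completeness, here's a minimal sketch of adding a depth attachment, which you'd want when rendering actual 3D geometry into an FBO. It assumes the FBO is currently bound and that `width` and `height` match the color texture:

```javascript
// Minimal sketch: adding a depth attachment to the currently bound FBO.
// `width` and `height` are assumed to match the color texture's dimensions.
const depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);
gl.framebufferRenderbuffer(
  gl.FRAMEBUFFER,      // Target
  gl.DEPTH_ATTACHMENT, // Attachment point
  gl.RENDERBUFFER,     // Attachment type
  depthBuffer          // The renderbuffer to attach
);
```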
Creating a Framebuffer
Creating a working FBO involves a few steps: creating the destination texture, creating the framebuffer itself, and attaching the two together:
```javascript
/**
 * Creates a texture and a framebuffer to render into it.
 * @param {WebGL2RenderingContext} gl The WebGL2 context.
 * @returns {{texture: WebGLTexture, framebuffer: WebGLFramebuffer}}
 */
function createFramebuffer(gl) {
  // 1. Create the texture to render into
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  // Allocate storage for the texture. We'll resize it later.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  // 2. Create the framebuffer
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

  // 3. Attach the texture to the framebuffer's color attachment point
  gl.framebufferTexture2D(
    gl.FRAMEBUFFER,       // Target
    gl.COLOR_ATTACHMENT0, // Attachment point
    gl.TEXTURE_2D,        // Texture target
    texture,              // The texture to attach
    0                     // Mipmap level
  );

  // 4. Check if the framebuffer is complete
  const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
  if (status !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error(`Framebuffer is not complete: ${status}`);
  }

  // Unbind to be tidy
  gl.bindTexture(gl.TEXTURE_2D, null);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { texture, framebuffer };
}
```
The key operation here is gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer), which redirects all subsequent rendering commands to the FBO instead of the screen. The texture attachment becomes our rendering target, effectively creating a virtual canvas. The framebuffer completeness check ensures everything is properly configured — a framebuffer can be incomplete if attachments have mismatched dimensions or unsupported formats.
Finally, binding null to the framebuffer returns rendering to the default framebuffer (the screen). This is important because once you've bound a custom framebuffer, all rendering continues to go to its texture until you explicitly unbind it.
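Because forgetting that unbind step is such a common source of "why is my canvas black?" bugs, it can help to wrap the bind/draw/unbind dance in a small helper. This is just a hypothetical convenience pattern, not something the WebGL API requires:

```javascript
// Hypothetical helper: bind a render target, draw, restore the default.
function renderTo(gl, framebuffer, drawFn) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer); // pass null for the canvas
  drawFn();
  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the default framebuffer
}
```

With a wrapper like this, each pass states its render target explicitly and can't accidentally leak the binding into the next pass.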
Practical Example: A Two-Pass Grayscale Effect
Let's apply these concepts by implementing a grayscale post-processing effect. We will perform two rendering passes:
- Pass 1: Draw our colorful UV gradient from the last article into our off-screen FBO.
- Pass 2: Draw a full-screen quad to the canvas, but use the texture from Pass 1 as an input and convert it to grayscale.
Shaders
We need two sets of shaders. The first renders our "scene" (the UV gradient), and the second applies the post-processing effect.
```glsl
// Renders our initial scene. Same as the previous article.
const sceneVertSrc = `#version 300 es
  layout(location = 0) in vec2 aPosition;
  out vec2 vUV;
  void main() {
    vUV = aPosition * 0.5 + 0.5;
    gl_Position = vec4(aPosition, 0.0, 1.0);
  }`;

const sceneFragSrc = `#version 300 es
  precision highp float;
  in vec2 vUV;
  out vec4 outColor;
  void main() {
    outColor = vec4(vUV, 0.5, 1.0); // Colorful gradient
  }`;

// Applies the grayscale effect by reading from a texture.
const postFxVertSrc = sceneVertSrc; // We can reuse the same vertex shader

const postFxFragSrc = `#version 300 es
  precision highp float;
  in vec2 vUV;
  out vec4 outColor;
  uniform sampler2D uSceneTexture; // Our off-screen texture

  void main() {
    vec3 sceneColor = texture(uSceneTexture, vUV).rgb;
    // Simple grayscale conversion using luminance formula
    float grayscale = dot(sceneColor, vec3(0.299, 0.587, 0.114));
    outColor = vec4(vec3(grayscale), 1.0);
  }`;
```
JavaScript Render Loop
The main logic happens in our drawing function. We need to set up both shader programs and the FBO, then orchestrate the two rendering passes.
```javascript
// --- In your setup code ---
const sceneProgram = createProgram(gl, sceneVertSrc, sceneFragSrc);
const postFxProgram = createProgram(gl, postFxVertSrc, postFxFragSrc);

const { texture: sceneTexture, framebuffer } = createFramebuffer(gl);

// createFullScreenQuad from the previous article, now optimized with indexed drawing
const quadVAO = createFullScreenQuad(gl);

// --- In your render loop ---
function render() {
  // Check if we need to resize the canvas and our framebuffer texture
  if (gl.canvas.width !== gl.canvas.clientWidth || gl.canvas.height !== gl.canvas.clientHeight) {
    gl.canvas.width = gl.canvas.clientWidth;
    gl.canvas.height = gl.canvas.clientHeight;

    // Resize the texture's storage
    gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.canvas.width, gl.canvas.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  }

  // --- PASS 1: Render scene to the framebuffer ---

  // Bind the FBO as the render target
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

  // Set the viewport to the texture's size
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Render the scene
  gl.useProgram(sceneProgram);
  gl.bindVertexArray(quadVAO);
  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  // --- PASS 2: Render to the screen with a post-processing effect ---

  // Unbind the FBO to render to the canvas
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  // Set the viewport to the canvas's size
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Use the post-fx shader and provide the scene texture
  gl.useProgram(postFxProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
  gl.uniform1i(gl.getUniformLocation(postFxProgram, 'uSceneTexture'), 0);

  // Render the quad
  gl.bindVertexArray(quadVAO);
  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  requestAnimationFrame(render);
}

render();
```
Running this code won't show a colorful gradient. Instead, you'll see its grayscale version — proof that we successfully rendered to a texture in the first pass, then read from that texture in the second pass to apply our post-processing effect.
You can explore the full example on GitHub and see a live demo here. The demo includes a split-screen comparison to help visualize the effect in action.
How would a blur effect work?
The grayscale shader reads from the scene texture once for every pixel. To create a blur, the idea is very similar, but for each pixel we would sample the uSceneTexture multiple times: once at the pixel's own location, and several more times in a small radius around it. We would then average all those color samples together. This averaging produces a box blur, the blur technique in which all samples are weighted equally.
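For the curious, a 3×3 box blur version of the post-processing fragment shader might look like the sketch below. It's not part of this article's demo, just an illustration of the sampling loop, written as a hypothetical drop-in replacement for postFxFragSrc:

```javascript
// Hypothetical drop-in replacement for postFxFragSrc: a 3x3 box blur.
const blurFragSrc = `#version 300 es
  precision highp float;
  in vec2 vUV;
  out vec4 outColor;
  uniform sampler2D uSceneTexture;

  void main() {
    // Size of one texel in UV space
    vec2 texel = 1.0 / vec2(textureSize(uSceneTexture, 0));
    vec3 sum = vec3(0.0);
    // Sample the pixel and its 8 neighbors
    for (int x = -1; x <= 1; x++) {
      for (int y = -1; y <= 1; y++) {
        sum += texture(uSceneTexture, vUV + vec2(x, y) * texel).rgb;
      }
    }
    outColor = vec4(sum / 9.0, 1.0); // Equal weights: a box blur
  }`;
```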
We're sticking to the simpler grayscale effect here to keep the focus on the framebuffer setup itself.
Summary
Framebuffer objects are one of the most powerful features of modern graphics APIs. By mastering them, you are no longer limited to a single rendering pass.
- FBOs act as virtual screens, allowing you to render into textures.
- Binding an FBO redirects all drawing commands to its attached textures.
- Binding null switches the rendering target back to the default framebuffer.
- This multi-pass technique is the foundation for post-processing, deferred rendering, and complex GPGPU simulations.
This gives us an important building block for the fluid simulation effect: we can now save the state of a calculation to a texture. But what if we want to create a feedback loop where we continuously read from a texture, compute a new result, and write it back? To do that, we'll need one more trick.
In the next article, we will explore the Ping-Pong Technique to create iterative feedback loops, which will allow us to advect dye and velocity for our fluid simulation.
This is part of my series on implementing interactive 3D visualizations with WebGL 2.