Olha Stefanishyna

Fluid Simulation in WebGL: The Advection Step

An image showing a vortex velocity field advecting an initial patch of color.


The previous article introduced the ping-pong technique, a critical pattern for creating iterative, stateful simulations on the GPU. It showed how to feed the result of one frame back into the next.

It’s time to build one of the fundamental components of a fluid simulation: advection, the process by which a flow carries quantities along with it. The shader built in this article will be the workhorse of the simulation.

Essentials of Advection

Advection is the process of moving a substance by a flow.

A crucial component in understanding how the process works is a velocity field — a mathematical function that assigns a 2D velocity vector to each point in the simulation area, describing local flow direction and speed.

In fluid simulation, advection describes how a velocity field moves quantities such as density, temperature, or dye around. Imagine dropping a spot of ink into a river; the current of the river (the velocity field) carries the ink (the quantity) downstream. The process of the ink moving with the water is advection.

The advection process depends on two textures being supplied:

  • Quantity Texture: A texture where each pixel stores a value we want to move, such as dye concentration, temperature, or even velocity itself.
  • Velocity Texture: A texture where each pixel stores a 2D velocity vector representing the direction and speed of the flow at that point. It is a GPU representation of a velocity field.

The quantity texture (covered in the previous ping-pong article) gets updated during the advection step based on the data stored in the velocity texture.

Velocity Texture

The velocity texture is used as input to the advection shader. Since textures store color data in separate Red, Green, Blue, and Alpha channels, we repurpose the Red channel to store horizontal velocity and the Green channel to store vertical velocity for each pixel.

The shader below generates a velocity texture by calculating a procedural vortex pattern.

glsl
// GLSL - Procedural Velocity Field Shader
#version 300 es
precision highp float;
in vec2 vUV;
out vec4 outColor;
uniform float uAspectRatio;

void main() {
  // Center the coordinates to the (-0.5, 0.5) range
  vec2 centeredUV = vUV - 0.5;

  // Account for aspect ratio
  centeredUV.x *= uAspectRatio;

  // Calculate the vortex velocity
  float vx = -centeredUV.y;
  float vy = centeredUV.x;

  // Store the 2D vector in the R and G channels
  outColor = vec4(vx, vy, 0.0, 1.0);
}

This shader creates a simple counter-clockwise vortex centered on the screen, and it only needs to run once to generate the velocity field and store it in a texture. The velocity values aren't normalized; their magnitude grows with distance from the center, so in practice they may need scaling or a maximum-velocity constraint to control the simulation speed.
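A minimal sketch of that one-time pass, assuming a createFbo helper (from the previous article) that returns a texture/framebuffer pair, a compiled velocityProgram for the shader above, and a full-screen quad already bound:

javascript
// Run once at startup: bake the vortex field into a texture.
const velocityFbo = createFbo(gl, width, height); // helper assumed from the ping-pong article

gl.bindFramebuffer(gl.FRAMEBUFFER, velocityFbo.framebuffer);
gl.viewport(0, 0, width, height);
gl.useProgram(velocityProgram);
gl.uniform1f(gl.getUniformLocation(velocityProgram, 'uAspectRatio'), width / height);

// Draw a full-screen quad; the fragment shader fills every pixel with a velocity vector
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

After this pass, velocityFbo.texture holds the field and is never written to again.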

Semi-Lagrangian Method

The tricky part of implementing advection on the GPU is that each pixel in the output texture needs to determine what value it should have after the velocity field has moved quantities around. The naive forward-mapping approach of pushing values forward from their current positions creates problems: multiple source pixels might write to the same destination, while other destinations might receive no data at all, leaving gaps or overlaps.

To avoid these problems, we use a backward-tracing approach, a common technique called the Semi-Lagrangian method. Instead of moving each pixel forward, the algorithm traces backward from each pixel's position to find the source location, then interpolates the value at that departure point and assigns it to the current pixel. Interpolation is needed because the source typically falls between pixel centers; it smoothly blends the values of the four surrounding pixels.

The GPU implementation requires careful memory management. Since fragment shaders execute simultaneously across all pixels, they would all be reading from and writing to the same texture at once, creating unpredictable results. The ping-pong pattern, described in the previous article, solves this by reading from one texture and writing the interpolated values to another. The backward trace itself is a single vector operation:

p_source = p_current - (v × Δt)

Where:

  • p_source is the position obtained by tracing back.
  • p_current is the coordinate of the current pixel being calculated in the output texture (the destination).
  • v is the velocity vector at p_current, sampled from the velocity texture.
  • Δt is the timestep, a small constant that controls how far we step back in time.

Choosing the right timestep: A good starting value for Δt is 0.016 (one frame at 60fps). The timestep isn't strictly tied to the framerate; it represents how far the simulation advances per iteration. Smaller values (e.g. 0.008) produce slower, more stable motion; larger values (e.g. 0.032) produce faster but potentially less stable results.
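As an illustration, here is a sketch of both options; the 0.032 cap mirrors the upper bound above, and computeDt is a hypothetical helper:

javascript
// Option 1 - fixed timestep: simplest and fully deterministic.
const dt = 0.016; // roughly one frame at 60 fps

// Option 2 - derive dt from real frame time, capped for stability
// (a background-tab pause would otherwise produce a huge step).
let lastTime = performance.now();
function computeDt(now) {
  const elapsed = (now - lastTime) / 1000; // milliseconds -> seconds
  lastTime = now;
  return Math.min(elapsed, 0.032);
}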

Once p_source is calculated, the quantity texture from the previous frame is sampled at that source coordinate. The sampled value becomes the new value for the current pixel. This "look-back" approach is stable, efficient, and perfect for a fragment shader.

Diagram illustrating the semi-Lagrangian method

Advection Shader

The next step is to create an advection shader that animates textures or particles through that field. Advection simulates how substances (like dye, smoke, or particles) move along with the flow, creating the visible motion.

This shader performs the semi-Lagrangian advection step. It takes the velocity field and the previous frame's quantity texture as input and writes the newly advected quantity to the output.

glsl
// GLSL - Advection Shader
#version 300 es
precision highp float;

in vec2 vUV;
out vec4 outColor;

uniform sampler2D uQuantity;
uniform sampler2D uVelocity;
uniform float uDt;

void main() {
  // Sample the velocity field at the current pixel's position
  vec2 velocity = texture(uVelocity, vUV).rg;

  // Calculate the source position (look back in time)
  vec2 sourceUV = vUV - velocity * uDt;

  // Clamp to texture boundaries to prevent wrapping artifacts
  sourceUV = clamp(sourceUV, 0.0, 1.0);

  // Sample the quantity from the previous frame at the source position
  vec4 advectedQuantity = texture(uQuantity, sourceUV);

  // Apply a tiny bit of dissipation for visual effect
  outColor = advectedQuantity * 0.9993;
}

This shader is the core of the simulation. Repeatedly running it within the ping-pong setup creates a continuous flow.

The clamping creates boundaries where the fluid sticks to the edges. Other boundary behaviors, such as wrapping the source coordinates to make the domain periodic, can be chosen depending on the desired effect.

The textures should use linear filtering (gl.LINEAR) for smooth interpolation, which is crucial for fluid-like motion. If the implementation needs to preserve sharp features, nearest filtering (gl.NEAREST) should be used instead. A sketch of that configuration follows.
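Here, quantityTexture stands in for whichever texture is being configured; the wrap modes pair naturally with the clamp() in the shader:

javascript
gl.bindTexture(gl.TEXTURE_2D, quantityTexture);

// Linear filtering blends the four pixels surrounding the traced-back source position
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

// Clamp sampling at the edges, matching the clamp() in the advection shader
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);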

This advection approach is extremely efficient on GPUs since each pixel can be calculated independently, making it perfect for parallel processing.

JavaScript Render Loop

Real-time advection requires coordinating multiple render passes each frame. The render loop manages framebuffer operations, handles ping-pong buffer swapping, and sequences the GPU operations that make the simulation work.

The JavaScript code orchestrates the process. We need two quantity textures for the dye, updated each frame via the ping-pong technique, and one velocity texture. A common approach is to create a separate framebuffer for each texture; a minimal pair with a swap helper is sketched below.
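This sketch assumes the same hypothetical createFbo helper as before, returning an object with texture and framebuffer properties:

javascript
function createFboPair(gl, width, height) {
  const pair = {
    read: createFbo(gl, width, height),  // sampled by the shader
    write: createFbo(gl, width, height), // rendered into
    swap() {
      const tmp = pair.read;
      pair.read = pair.write;
      pair.write = tmp;
    },
  };
  return pair;
}

const quantityFboPair = createFboPair(gl, canvas.width, canvas.height);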

After initial setup (creating shader programs, framebuffers, and drawing the initial dye), the animation loop performs these steps each frame:

  • Advection Pass: Bind the write FBO as render target, use the advection shader program, set uniforms for the quantity texture (from read FBO) and velocity field, then draw a full-screen quad to perform the advection calculation.

  • Display Pass: Render the result to the screen by binding the canvas framebuffer and drawing another quad using the newly advected quantity texture.

  • Swap: Exchange the read/write framebuffers to prepare for the next frame iteration.

javascript
// --- Render Loop ---
function render() {
  // --- Advection Pass ---
  gl.bindFramebuffer(gl.FRAMEBUFFER, quantityFboPair.write.framebuffer);
  gl.useProgram(advectionProgram);

  // --- Set uniforms ---
  // Texture unit 0: the quantity from the previous frame
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, quantityFboPair.read.texture);
  gl.uniform1i(advectionUniforms.quantity, 0);

  // Texture unit 1: the static velocity field
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, velocityFbo.texture);
  gl.uniform1i(advectionUniforms.velocity, 1);

  gl.uniform1f(advectionUniforms.dt, dt);

  // Run the advection shader on a full-screen quad
  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  // --- Display Pass ---
  // Render the result to the canvas
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.useProgram(displayProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, quantityFboPair.write.texture);
  gl.uniform1i(displayUniforms.displayTexture, 0);

  gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

  // --- Swap for next frame ---
  quantityFboPair.swap();
  requestAnimationFrame(render);
}

This code snippet shows the essential render loop structure.

The complete implementation, with shader setup and initialization, is available in the GitHub repo; try the demo to see it running.

Demo Implementation Notes

Note that the initial scene shader is updated to create a gradient that transitions from orange at the center to green at the outer edge, applying a smooth color falloff based on distance from the center. This makes the advection easier to see as the vortex twists the gradient.

Moving to Floating-Point Textures

Implementing advection requires velocity data, and that creates a new problem in texture configuration. Unlike the simple color data from the ping-pong article, velocity vectors need to represent both positive and negative values: flow can move left (-x) or right (+x), up (+y) or down (-y).

javascript
// Previous approach (ping-pong article)
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Better approach for velocity data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA16F, width, height, 0, gl.RGBA, gl.HALF_FLOAT, null);

Standard 8-bit textures can only store values from 0 to 1, so negative velocity components get clamped to zero, corrupting the flow direction data. One workaround is to remap a range like [-0.5, 0.5] into [0, 1] for storage (encoded = v + 0.5) and undo the mapping when reading in the shader (v = encoded - 0.5).

Floating-point textures eliminate this problem. They can store negative values directly, giving cleaner shader code and better precision. The velocity shader can output vec4(-0.3, 0.2, 0.0, 1.0) and the advection shader can read exactly that - no mathematical conversions required.

To render into floating-point textures, request the required extension:

javascript
const ext = gl.getExtension('EXT_color_buffer_float');
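getExtension returns null when the extension is unavailable, so it is worth guarding the call; continuing the snippet above:

javascript
if (!ext) {
  // Without the extension, float formats such as RGBA16F are not color-renderable;
  // fall back to 8-bit textures plus the remapping approach described earlier.
  console.warn('EXT_color_buffer_float not supported, falling back to 8-bit textures');
}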

The trade-off is memory usage - 16-bit floats use twice as much GPU memory as 8-bit integers. For fluid simulation, this extra precision is worth it. The resulting code is easier to understand, debug, and extend.

Most modern browsers support the EXT_color_buffer_float extension needed for this approach, making it a practical choice for applications.

Summary and What's Next

This article covered the foundation of fluid simulation by implementing advection - the process of moving data through a grid using a velocity field. This is a core concept in graphics and scientific computing.

Key concepts:

  • Semi-Lagrangian Method: An efficient and stable "look-back" algorithm that is ideal for implementation in a fragment shader.
  • Ping-Pong Buffers: A technique for managing evolving simulation state, passing data from one frame to the next.
  • Static Velocity Fields: Creating predetermined flow patterns like vortices for controlled advection effects.

However, the fluid simulation is still incomplete. The velocity field is static and doesn't react to the fluid itself. Coming up next: the Divergence and Pressure steps, which calculate how the fluid should behave, making the velocity field dynamic and interactive.


This is part of my series on implementing interactive 3D visualizations with WebGL 2.
