WebGL 2 Basics: Drawing a Full-Screen Quad

As mentioned in the previous article, WebGL 2 uses a programmable pipeline. This article covers shaders and GLSL language fundamentals - the essential building blocks before tackling fluid simulation.
Before creating complex effects, you need to understand how WebGL fundamentally renders anything to the screen. Unlike declarative approaches like HTML/CSS, WebGL requires you to explicitly define the geometry and the appearance using custom programs. These programs, called shaders, are written in GLSL and executed directly on the GPU.
Introduction to GLSL ES 3.0
GLSL ES 3.0 (OpenGL Shading Language for Embedded Systems version 3.0) is a high-level shading language with a C-like syntax, specifically designed for graphics processing units (GPUs) in embedded systems and web browsers.
The GLSL ES 3.00 specification requires every shader to begin with `#version 300 es` to select the GLSL ES 3.0 language version. Without this directive, the shader won't compile. It also requires precision qualifiers in fragment shaders (e.g. `precision mediump float;`).
GLSL ES 3.0 Type System
The type system is the foundation of GLSL ES 3.0 code. Understanding it will help you declare variables correctly and use built-in functions without confusion.
Scalar Types
- `float` (32-bit IEEE 754)
- `int` (32-bit signed)
- `uint` (32-bit unsigned)
- `bool`

These are the simplest types: single numeric or boolean values.
Vector Types
- Floating-point: `vec2`, `vec3`, `vec4`
- Integer: `ivec2`, `ivec3`, `ivec4`
- Unsigned: `uvec2`, `uvec3`, `uvec4`
- Boolean: `bvec2`, `bvec3`, `bvec4`
GPUs process vector operations efficiently in parallel.
```glsl
vec4 color = vec4(1.0, 0.5, 0.2, 1.0);
float r = color.r;    // Same as color.x or color[0]
vec3 bgr = color.bgr; // Swizzle: vec3(0.2, 0.5, 1.0)
```
Matrix and Sampler Types
- Matrices: `mat2`, `mat3`, `mat4` (plus non-square variants)
- Texture samplers: `sampler2D`, `sampler3D`, `samplerCube`
- Integer samplers: `isampler2D`, `usampler2D`

A sampler type (e.g. `sampler2D`) tells the compiler you want to sample a 2D texture inside your shader. You'll see these used in fragment shaders when you write expressions like `texture(uTexture, vUV)`.
Variable Qualifiers
In GLSL, you also need to tell the compiler how data moves into and out of each shader stage. That's where qualifiers like `in`, `out`, and `uniform` come in.

- `in`: Input data (per-vertex in the vertex shader, interpolated in the fragment shader)
- `out`: Output data (interpolated outputs in the vertex shader, framebuffer outputs in the fragment shader)
- `uniform`: Constant values for the entire draw call
Uniform Blocks
WebGL 2 supports grouping uniforms into blocks for efficient updates. The GLSL code declares the uniform block structure in the shader:
```glsl
layout(std140) uniform TransformBlock {
    mat4 model;
    mat4 view;
    mat4 projection;
} transform;
```
The JavaScript code creates a buffer and uploads data to match that structure:
```javascript
// Create and bind uniform buffer
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
gl.bufferData(gl.UNIFORM_BUFFER, transformData, gl.DYNAMIC_DRAW);

// Associate with shader program
const blockIndex = gl.getUniformBlockIndex(program, 'TransformBlock');
gl.uniformBlockBinding(program, blockIndex, 0);
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, ubo);
```
The `std140` layout is a standardized memory layout rule. It ensures all GPUs organize uniform buffer data the same way. Without this standard, your code might work on one graphics card but fail on another due to different memory arrangements.
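As a sketch of how the matching CPU-side data could be built: under `std140`, each `mat4` occupies 16 floats (64 bytes) stored back to back, so this block is exactly 192 bytes. The `transformData` name matches the JavaScript snippet above; the identity-matrix values here are purely illustrative.

```javascript
// Sketch: building the 192-byte buffer for TransformBlock under std140.
// Each mat4 is 16 floats (64 bytes), column-major, with no padding between them.
const FLOATS_PER_MAT4 = 16;
const transformData = new Float32Array(3 * FLOATS_PER_MAT4); // 192 bytes total

// Illustrative values: write identity matrices for model, view, projection.
function writeIdentity(target, offset) {
  for (let i = 0; i < 4; i++) target[offset + i * 4 + i] = 1;
}
writeIdentity(transformData, 0);                   // model
writeIdentity(transformData, FLOATS_PER_MAT4);     // view
writeIdentity(transformData, 2 * FLOATS_PER_MAT4); // projection
```

This `transformData` array is what you would pass to `gl.bufferData(gl.UNIFORM_BUFFER, ...)` as shown above.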
WebGL 2 Vertex Specification Enhancements
WebGL 2 (GLSL ES 3.0) added several conveniences that cut down on boilerplate JavaScript and reduce errors when setting up vertex data.
Vertex Array Objects (VAOs)
Instead of binding and configuring each attribute (position, normal, UV, etc.) every time you draw, you can store that entire setup in a VAO.
VAOs encapsulate vertex attribute configuration, storing buffer bindings, attribute pointers, and enable states. This eliminates repetitive setup calls:
```javascript
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// Configure all attributes once
gl.bindVertexArray(null);

// Later: single call to use
gl.bindVertexArray(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
```
Explicit Attribute Locations
Instead of querying attribute indices at runtime with `gl.getAttribLocation`, you can hard-code the location right in your shader. GLSL ES 3.0 supports explicit attribute location assignment, eliminating the need for `getAttribLocation` queries:
```glsl
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec3 aColor;
```
JavaScript can then reference attributes numerically: `gl.enableVertexAttribArray(0)`.
Shaders
A shader is a program written in GLSL for processing graphics data. There are two main types of shaders that work together to render graphics: the vertex shader and the fragment shader.
GLSL provides predefined built-in variables that handle communication between shader stages and the GPU.
Vertex Shader Built-ins:

- `gl_Position`: Must be written by the vertex shader. It's the clip-space (x, y, z, w) output.
- `gl_VertexID`: Automatically provides the index of the current vertex (useful for procedural geometry).
- `gl_InstanceID`: When doing instanced draws, this tells you which instance you're on.
Fragment Shader Built-ins:
- `gl_FragCoord`: The window-space coordinates (x, y, z, w) of the current fragment center, where z is depth and w is 1/w from clip space. `gl_FragCoord.y` uses a bottom-left origin (OpenGL convention), so manual flipping may be needed for top-left-origin coordinates.
- `gl_FrontFacing`: A boolean telling you whether the primitive was front-facing or back-facing.
- `gl_FragDepth`: Allows the fragment shader to write a custom value to the depth buffer.
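To make the origin difference concrete, here is a minimal sketch of converting `gl_FragCoord`'s bottom-left-origin y to a top-left-origin y (for example, to match image or mouse coordinates). `flipFragCoordY` is a hypothetical helper, not part of any API:

```javascript
// Sketch: convert a bottom-left-origin y (as in gl_FragCoord) to a
// top-left-origin y for a canvas of the given height.
function flipFragCoordY(fragY, canvasHeight) {
  return canvasHeight - fragY;
}

// A fragment on the bottom row (center y = 0.5) of a 600px-tall canvas
// maps to 599.5 in top-left coordinates.
flipFragCoordY(0.5, 600); // -> 599.5
```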
Vertex Shader
The vertex shader executes once per vertex with the mandatory output `gl_Position` in clip-space coordinates. It receives per-vertex data through `in` variables and passes interpolated data to the fragment shader via `out` variables:
```glsl
#version 300 es

in vec3 aPosition; // Per-vertex position from buffer
in vec3 aColor;    // Per-vertex color from buffer
out vec3 vColor;   // Interpolated color for fragments

uniform mat4 uModelViewMatrix;  // Constant for all vertices
uniform mat4 uProjectionMatrix; // Constant for all vertices

void main() {
    // Transform to clip space with w=1.0 for positions
    gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
    vColor = aColor; // Pass color to fragment shader
}
```
After vertex shader execution, the GPU clips primitives against the clip volume; its fixed-function hardware then automatically performs perspective division (dividing by the w component) to convert clip-space coordinates to Normalized Device Coordinates (NDC).
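The perspective division itself is simple arithmetic. Here is a minimal JavaScript sketch of what the fixed-function stage computes (the `clipToNDC` name is illustrative):

```javascript
// Sketch of perspective division: clip-space (x, y, z, w) -> NDC (x/w, y/w, z/w).
function clipToNDC([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// A clip-space point with w = 2 maps to the corner (1, 1, 1) of the NDC cube.
clipToNDC([2, 2, 2, 2]); // -> [1, 1, 1]
```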
Compute in the vertex shader when possible: a mesh has relatively few vertices, while the fragment shader may run for millions of pixels.
Fragment Shader
After NDC is produced, the GPU's fixed-function hardware maps NDC to window (pixel) coordinates. The rasterizer then converts triangles (defined by vertices) into fragments. Each fragment corresponds to a pixel location the triangle covers and contains the screen position (x, y), a depth value (z), and other values interpolated from the vertex shader outputs.
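As a small sketch of the NDC-to-window mapping described above (assuming a viewport anchored at the origin; `ndcToWindow` is a hypothetical helper):

```javascript
// Sketch of the viewport transform: NDC in [-1, 1] -> window pixels,
// for a viewport starting at (0, 0) with the given width and height.
function ndcToWindow([x, y], width, height) {
  return [(x * 0.5 + 0.5) * width, (y * 0.5 + 0.5) * height];
}

ndcToWindow([0, 0], 800, 600);   // center of NDC -> [400, 300]
ndcToWindow([-1, -1], 800, 600); // bottom-left corner -> [0, 0]
```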
The fragment shader runs once per fragment, receiving interpolated values from the vertex shader. The GPU automatically interpolates all vertex shader outputs across each triangle's surface:
```glsl
#version 300 es
precision mediump float;

in vec3 vColor;     // Interpolated from vertex shader
out vec4 fragColor; // RGBA output to framebuffer

void main() {
    fragColor = vec4(vColor, 1.0); // Alpha = 1.0 for opaque
}
```
Output colors typically use a `vec4` data type, representing red, green, blue, and alpha (RGBA) components, with values usually ranging from 0.0 to 1.0.
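For intuition, here is a small sketch of converting a familiar 8-bit RGBA color (0 to 255 per channel) into the normalized 0.0 to 1.0 floats a fragment shader outputs (`toNormalizedRGBA` is an illustrative helper):

```javascript
// Sketch: normalize an 8-bit RGBA color to the 0.0-1.0 range
// used by fragment shader color outputs.
function toNormalizedRGBA([r, g, b, a]) {
  return [r / 255, g / 255, b / 255, a / 255];
}

toNormalizedRGBA([255, 0, 51, 255]); // -> [1, 0, 0.2, 1]
```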
GPUs can trade accuracy for speed — using `mediump` instead of `highp` can improve performance on mobile devices while maintaining acceptable visual quality.
GPUs execute fragments in groups of threads (sometimes called warps or wavefronts). If a branch splits those threads — some go one way, some go another — the GPU ends up running both sides of the branch. That extra work makes branching less efficient:
```glsl
// Efficient: branchless operations
float t = step(0.5, value); // Returns 0.0 or 1.0
vec3 result = mix(colorA, colorB, t);

// Inefficient: dynamic branching
vec3 result = (value > 0.5) ? colorB : colorA;
```
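To see that the two forms compute the same colors, here is a JavaScript sketch of GLSL's `step()` and `mix()` alongside the branching version (all names below are illustrative; note that at exactly `value == 0.5` the two forms differ, since `step` treats the edge as inclusive while `>` does not):

```javascript
// JavaScript equivalents of GLSL's step() and mix().
const step = (edge, x) => (x < edge ? 0.0 : 1.0);
const mix = (a, b, t) => a * (1.0 - t) + b * t;

// Branchless selection between two RGB colors.
function branchless(colorA, colorB, value) {
  const t = step(0.5, value);
  return colorA.map((a, i) => mix(a, colorB[i], t));
}

// Equivalent dynamic branch.
function branching(colorA, colorB, value) {
  return value > 0.5 ? colorB : colorA;
}
```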
Full-Screen Quad Geometry
Many advanced GPU effects — from motion blur to fluid dynamics — are implemented using two triangles that form a rectangle (a quad). When the quad covers the whole viewport, it is called a full-screen quad, and drawing it invokes the fragment shader once for every pixel. This technique lets you run a fragment shader on every pixel of the screen or of the quad's area.
A full-screen quad typically uses two triangles to cover the viewport because GPUs are highly optimized for processing triangles. This method ensures the entire rectangular area is consistently covered.
Full-Screen Quad Implementation
This implementation renders a full-screen quad. We'll create a visual test pattern by outputting UV coordinates as colors - this technique helps verify that our geometry covers the viewport correctly and that our coordinate mapping works as expected. This same setup will later serve as the foundation for post-processing effects.
Shaders
```glsl
// Vertex Shader
#version 300 es
layout(location = 0) in vec2 aPosition;
out vec2 vUV;

void main() {
    gl_Position = vec4(aPosition, 0.0, 1.0);
    // Account for WebGL's bottom-left origin:
    vUV = aPosition * 0.5 + 0.5;
    // Flip Y for top-left texture origin
    vUV.y = 1.0 - vUV.y;
}
```

```glsl
// Fragment Shader
#version 300 es
precision mediump float;
in vec2 vUV;
out vec4 fragColor;

void main() {
    fragColor = vec4(vUV, 0.5, 1.0); // RG gradient visualization
}
```
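The UV mapping in the vertex shader is easy to verify on paper. Here is a JavaScript sketch of the same arithmetic (`positionToUV` is an illustrative helper mirroring the shader code):

```javascript
// Sketch of the vertex shader's UV computation: clip-space position in
// [-1, 1] -> UV in [0, 1], with Y flipped for a top-left texture origin.
function positionToUV([x, y]) {
  const u = x * 0.5 + 0.5;
  const v = 1.0 - (y * 0.5 + 0.5);
  return [u, v];
}

positionToUV([-1, -1]); // bottom-left vertex -> UV [0, 1]
positionToUV([1, 1]);   // top-right vertex  -> UV [1, 0]
```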
JavaScript Setup
```javascript
// Context acquisition
const gl = canvas.getContext('webgl2');
if (!gl) throw new Error('WebGL 2 not supported');

// Shader compilation
function compileShader(gl, source, type) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);

  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    const info = gl.getShaderInfoLog(shader);
    gl.deleteShader(shader);
    throw new Error(`Compile failed: ${info}`);
  }
  return shader;
}

// Program creation
function createProgram(gl, vertSrc, fragSrc) {
  const program = gl.createProgram();
  gl.attachShader(program, compileShader(gl, vertSrc, gl.VERTEX_SHADER));
  gl.attachShader(program, compileShader(gl, fragSrc, gl.FRAGMENT_SHADER));
  gl.linkProgram(program);

  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(`Link failed: ${gl.getProgramInfoLog(program)}`);
  }

  return program;
}

// Full-screen quad VAO
function createFullScreenQuad(gl) {
  const vao = gl.createVertexArray();
  gl.bindVertexArray(vao);

  // Two triangles: [-1,-1] to [1,1]
  const vertices = new Float32Array([
    -1, -1,  1, -1,  -1, 1, // Triangle 1
    -1,  1,  1, -1,   1, 1, // Triangle 2
  ]);

  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  gl.enableVertexAttribArray(0);
  gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

  gl.bindVertexArray(null);
  return vao;
}
```
This implementation renders a full-screen quad with a UV coordinate gradient, serving as the foundation for post-processing effects and GPU-based computations.
You can check out the full example and see the demo via the accompanying links.
Debugging Techniques
Debugging shaders presents unique challenges because they execute on the GPU, separate from the JavaScript environment. You can't simply use `console.log()` inside a GLSL shader to inspect values. So, you have to rely on specific techniques to understand program flow and data, helping to identify and fix issues.
Shader Compilation Checks
```javascript
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error('Shader error:', gl.getShaderInfoLog(shader));
}
```
Visual Debugging
Output intermediate values as colors:
```glsl
fragColor = vec4(vUV, 0.0, 1.0);                       // UV coordinates
fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // Normals
```
Summary
Full-screen quads provide the foundation for GPU image processing and simulations. Two triangles covering the viewport create a one-to-one fragment-to-pixel mapping, enabling parallel computation across the framebuffer. This technique underlies post-processing effects, fluid simulations, and GPU-accelerated algorithms.
This is part of my series on implementing interactive 3D visualizations with WebGL 2.
Next article: Rendering to Textures with Framebuffers.