WebGL: in-browser 3D graphics
Nick Whitelegg
Maritime and Technology Faculty
Southampton Solent University
WebGL
OpenGL and OpenGL ES
WebGL and how it differs
Shaders
Key components of a WebGL application
OpenGL
The standard API for cross-platform 3D graphics
Shapes usually made up of triangles (graphics cards are optimised for rendering triangles)
Each triangle drawn separately
OpenGL - example
Draws two triangles (6 vertices):
glBegin(GL_TRIANGLES);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, 0.5f, 1.0f);
glVertex3f(0.5f, 1.0f, 1.0f);
glVertex3f(2.0f, 2.0f, 1.0f);
glVertex3f(3.0f, 3.0f, 1.0f);
glVertex3f(2.0f, 3.0f, 1.0f);
glEnd();
Modelview and projection matrices
Central to OpenGL are the concepts of world coordinates and eye coordinates
World coordinates represent how the 3D data is actually stored in code
Eye coordinates represent where the data is with respect to the user's current view of the world
The modelview matrix is the matrix which transforms world coordinates to eye coordinates
There is also the perspective, or projection, matrix, which specifies how the scene is altered according to perspective (vertical field-of-view, aspect ratio, near and far clipping planes)
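The projection matrix above can be built directly in JavaScript. The sketch below uses the classic gluPerspective formula (which the glUtils library mentioned later also implements); the function name makePerspective is illustrative, not from the example code, and the matrix is laid out column-major as WebGL expects:

```javascript
// Build a 4x4 perspective projection matrix (flat, column-major, as
// gl.uniformMatrix4fv expects) from a vertical field-of-view in
// degrees, an aspect ratio, and near/far clipping planes.
function makePerspective(fovyDegrees, aspect, near, far) {
  const f = 1.0 / Math.tan((fovyDegrees * Math.PI / 180) / 2);
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,
    0, 0, 2 * far * near * nf, 0
  ];
}
```

With a 90-degree field of view and a square viewport, the first two diagonal entries come out as 1, and the fourth column encodes the perspective divide.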
OpenGL ES
Version of OpenGL optimised for devices with limited memory, e.g. mobile devices
API significantly different to standard OpenGL
In particular, drawing shapes takes a different approach: rather than drawing each triangle individually, all vertices making up a complex shape are sent direct to the graphics card at once for fast, efficient drawing
WebGL is based on OpenGL ES: each call in the C-based API has a corresponding JavaScript call in WebGL
Vertex buffer
The OpenGL ES approach is to send all vertices to graphics card as a buffer
This allows the card to efficiently draw all vertices, and therefore all shapes, at once
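Before they reach the card, the vertices are packed into a single flat typed array. A minimal sketch of that packing step for two triangles (the coordinates here are illustrative):

```javascript
// Pack two triangles (6 vertices, x/y/z per vertex) into the flat
// Float32Array that gl.bufferData() expects: one contiguous block of
// 18 floats, rather than individually drawn triangles.
const vertices = [
  // triangle 1
  0.0, 0.0, 0.0,
  1.0, 0.0, 0.0,
  0.5, 1.0, 0.0,
  // triangle 2
  2.0, 2.0, 0.0,
  3.0, 3.0, 0.0,
  2.0, 3.0, 0.0
];
const bufferData = new Float32Array(vertices);
// bufferData.length is 18: 6 vertices * 3 components each
```

In the browser this array would then be handed to the card with gl.bufferData(gl.ARRAY_BUFFER, bufferData, gl.STATIC_DRAW), as shown later in the example.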
Shaders
WebGL (like all OpenGL ES 2.0 implementations) requires the use of shaders
Shaders are small programs, written in a C-like language, which run on the graphics card (GPU) and specify how vertices, and thus shapes, appear on screen (position, colour, lighting, textures, etc)
A shader-based OpenGL application will consist of a standard CPU-based program plus a series of shaders running on the GPU
The CPU program passes information to the shaders
Vertex and fragment shaders
There are two types of shader:
Vertex shaders – specify how vertices appear: how they are transformed from world space (how they are stored in code) to eye space (where they appear on-screen)
Fragment (or pixel) shaders – determine how pixels appear on-screen (colour, lighting effects, etc)
Vertex shaders run before fragment shaders in the rendering process
Shader variables
There are three classes:
Attribute variables – for quantities which differ for each vertex (e.g. vertex position)
Uniform variables – for quantities which remain the same per render (e.g. the modelview matrix, light position)
Varying variables – used to pass information from vertex to fragment shader (only the vertex shader can read information input from the main CPU-based program)
Vertex shader
Vertex shader:
attribute vec4 aVertexPosition;
void main(void)
{
gl_Position = aVertexPosition;
}
Vertex shader
This shader sets the position of the current vertex (gl_Position; built-in GLSL variable) to the attribute variable aVertexPosition
The attribute variable aVertexPosition (data type vec4; inbuilt GLSL data type) will contain the current position from the vertex buffer; each vertex from the buffer in turn is sent to the shader
In our JavaScript code we link the buffer with the aVertexPosition variable
Fragment shader
void main (void)
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Fragment shader
In this case the fragment shader is simple
All we do is set the colour of the current fragment (a candidate pixel within the shape being rendered) to red (RGBA 1,0,0,1)
gl_FragColor is an inbuilt GLSL variable representing the current fragment colour
Note use of vec4 again
More complex vertex shader
attribute vec4 aColour, aVertex;
uniform mat4 uMv, uPersp;
varying vec4 colour;
void main (void)
{
gl_Position = uPersp * uMv * aVertex;
colour = aColour;
}
More complex vertex shader
Note how in this vertex shader we calculate the final vertex position by multiplying the input vertex position (world coordinates) first by the modelview matrix and then by the perspective matrix
We also read in the colour from the attribute variable aColour (linked to a buffer, like aVertex) and save it in the varying variable colour
This is because fragment shaders cannot read attribute variables directly (attributes are per-vertex inputs to the vertex shader, which runs first), so we save the colour to a varying variable for later use in the fragment shader
Varying variables are commonly used for lighting effects, where a pixel colour may depend on the relationship of a vertex position to a light source
Components of a WebGL application
HTML5 and CSS for the user interface, including a <canvas> tag for the 3D scene
JavaScript to process user events and communicate with servers if necessary
Shaders running on the GPU to control rendering
Architecture of a WebGL application
WebGL applications take place partly in the web browser (via JavaScript) and partly on the GPU (via shaders)
A WebGL application generally works in this way:
JavaScript responds to user events by, e.g. changing the position of objects in the 3D scene, rotating camera, etc
These changes are then communicated to shaders on the GPU which actually do the rendering
Thus hardware-accelerated rendering is achieved
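The JavaScript side of this division of labour can be sketched as follows. This is a hypothetical illustration (the names cameraAngle, rotationY and onKey are not from the example code): user events update JS-side camera state, a fresh modelview matrix is built from it, and a real application would then upload that matrix to the shader with gl.uniformMatrix4fv().

```javascript
// JS-side camera state, updated in response to user events
let cameraAngle = 0; // radians, rotation about the y axis

// Build a column-major 4x4 rotation matrix about the y axis
function rotationY(angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [
    c, 0, -s, 0,
    0, 1,  0, 0,
    s, 0,  c, 0,
    0, 0,  0, 1
  ];
}

// Event handler: change the angle, rebuild the matrix. In the
// browser, the returned matrix would be sent to the GPU as a uniform.
function onKey(key) {
  if (key === 'ArrowLeft')  cameraAngle += 0.1;
  if (key === 'ArrowRight') cameraAngle -= 0.1;
  return rotationY(cameraAngle);
}
```

The shaders never see the event handling: they only receive the updated matrix, and the GPU does the per-vertex work.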
Simple WebGL Example
We will now examine a simple WebGL example from start to finish
The example, www.free-map.org.uk/~nick/webgl/ex1.html, draws two red triangles
It illustrates:
How to create a vertex buffer and send it to the graphics card
How to tell the vertex shader to use data from the vertex buffer to determine where to draw the vertices
How to tell the fragment shader which colour to use for the triangles
How a WebGL application starts up
WebGL initialised from within JavaScript
Shaders (embedded within the HTML) are read in via DOM and compiled to native GPU code
WebGL startup code
function init()
{
    var canvas = document.getElementById('canvas1');
    var gl = null;
    try
    {
        gl = canvas.getContext('experimental-webgl');
    }
    catch(e) {}
    if(!gl)
    {
        try
        {
            gl = canvas.getContext('webkit-3d');
        }
        catch(e) {}
    }
    if(gl)
    {
        // ... continue processing
    }
    else
    {
        alert("no webgl support");
    }
}
WebGL startup code - explanation
We obtain a canvas using the DOM and then a canvas 3D context
This is done slightly differently in Mozilla and in WebKit-based browsers (Chrome, Safari, etc.), hence the two different context names
If neither succeeds, we inform the user that there is no WebGL support
Loading shaders
Each shader embedded in the HTML source with a standard DOM ID
We use the DOM to obtain the shader source
… and then compile it to an executable form
Code to do this is long, so not reproduced here, but available in the example
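The core of the compile step can be sketched as below. The function name compileShader is hypothetical, but the WebGL calls it makes (createShader, shaderSource, compileShader, getShaderParameter, getShaderInfoLog) are the standard ones; in the actual example, the GLSL source string would first be pulled out of the HTML via the DOM before being passed in.

```javascript
// Compile one shader from a GLSL source string. 'gl' is the WebGL
// context; 'type' is gl.VERTEX_SHADER or gl.FRAGMENT_SHADER.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);   // create an empty shader object
  gl.shaderSource(shader, source);        // attach the GLSL source
  gl.compileShader(shader);               // compile to native GPU code
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    // Compilation failed: report the driver's error log
    throw new Error('Shader compile failed: ' + gl.getShaderInfoLog(shader));
  }
  return shader;
}
```

The vertex and fragment shaders compiled this way are then attached to a program object and linked, giving the shaderProgram used in the slides that follow.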
Setting up a vertex buffer
var buffer1 = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer1);
var vertices=[0.0, 0.0, 0.0,
1.0,0.0, 0.0,0.5, 1.0, 0.0 ];
gl.bufferData(gl.ARRAY_BUFFER,
new Float32Array(vertices),
gl.STATIC_DRAW);
Obtaining a handle on the shader variable from JavaScript
To be able to use a shader variable from JavaScript we need to get a “handle” on it
Use gl.getAttribLocation() to do this
The code below shows how to do this for a shader variable aVertex:
var va = gl.getAttribLocation(shaderProgram, "aVertex");
gl.enableVertexAttribArray(va);
Actually drawing
// Select the vertex buffer to use
gl.bindBuffer(gl.ARRAY_BUFFER,buffer1);
// Associate the buffer with the shader variable via its "handle" (see last slide)
gl.vertexAttribPointer(va, 3, gl.FLOAT, false, 0, 0);
// Draw the triangle, this will use the buffer and pass each vertex to the shader in turn
// 0 = index of first vertex, 3 = number of vertices
gl.drawArrays(gl.TRIANGLES,0,3);
Extending to draw two triangles
Create a buffer with the vertex coordinates for both triangles
Otherwise the same procedure, except for the line to actually draw the vertices in the buffer:
gl.drawArrays(gl.TRIANGLES, 0, 6);
6 because we now have 6 vertices, not 3
Variable colour triangles
Imagine we want a triangle with a different colour at each vertex (red, green, blue), which blend to create a "multicolour" effect in the centre of the triangle, as in http://www.free-map.org.uk/~nick/webgl/ex2.html
How do we do that?
Create a colour buffer as well as a vertex buffer
Order of colours in colour buffer matches order of vertices in vertex buffer
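The index correspondence between the two buffers can be seen in a minimal sketch (the helper colourOfVertex is purely illustrative): vertex i in the vertex buffer takes its colour from the same index i in the colour buffer.

```javascript
// Vertex i occupies elements [3*i, 3*i+3) of the vertex buffer, and
// its colour occupies the same range of the colour buffer.
const vertices = new Float32Array([
  0.0, 0.0, 0.0,   // vertex 0
  1.0, 0.0, 0.0,   // vertex 1
  0.5, 1.0, 0.0    // vertex 2
]);
const colours = new Float32Array([
  1.0, 0.0, 0.0,   // vertex 0: red
  0.0, 1.0, 0.0,   // vertex 1: green
  0.0, 0.0, 1.0    // vertex 2: blue
]);

// Illustrative helper: look up the RGB triple paired with vertex i
function colourOfVertex(i) {
  return Array.from(colours.slice(3 * i, 3 * i + 3));
}
```

Because both buffers use 3 components per vertex with no interleaving, the same vertexAttribPointer stride and offset (0, 0) work for each.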
Variable colour triangles: code
This example sets the vertices to red, green and blue respectively
// Set up handles on shader variables for vertices and colours
p_vrtx = gl.getAttribLocation(shaderProgram, "aVertex");
gl.enableVertexAttribArray(p_vrtx);
p_col = gl.getAttribLocation(shaderProgram, "aColour");
gl.enableVertexAttribArray(p_col);

// Make buffers for vertices and colours
var vertices = [ vertex coords ];
var buffer1 = makeBuffer(vertices);
var colours = [ 1.0, 0.0, 0.0,
                0.0, 1.0, 0.0,
                0.0, 0.0, 1.0 ];
var colourBuffer = makeBuffer(colours);

// Do the drawing - bind each buffer, then associate it with the
// corresponding shader variable (vertexAttribPointer reads from the
// currently bound buffer)
gl.bindBuffer(gl.ARRAY_BUFFER, buffer1);
gl.vertexAttribPointer(p_vrtx, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, colourBuffer);
gl.vertexAttribPointer(p_col, 3, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
Variable colour triangles - explanation
First we get handles on the two shader variables we need (for vertex coords and colour)
Then we make the buffers; assume makeBuffer() is a function which sets up a buffer as in the first example
Then we do the drawing: note that we bind each buffer, associate it with the corresponding shader variable, and then issue the command to draw
Communicating matrices to the shader
Recall that the modelview and perspective matrices in the shader are uniform variables (i.e. stay the same for all vertices)
We typically also use a variable to store each matrix within JavaScript, so that we can keep track of, e.g. camera orientation/position changes in response to user events
Each time the JavaScript variable changes, we send the updates to the shader
Communicating matrices to the shader - details
Obtaining a "pointer" to the shader variable from JavaScript:
p_umvMtx = gl.getUniformLocation(shaderProgram, "umvMtx");
where umvMtx is the shader variable
Sending the updated JavaScript matrix to the corresponding shader variable:
gl.uniformMatrix4fv(p_umvMtx, false, new Float32Array(mvmtx.flatten()));
where mvmtx is the JS variable representing the matrix
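The flatten() call above turns a matrix object into the flat 16-element array that gl.uniformMatrix4fv() needs. Sylvester provides its own implementation; the sketch below is a minimal stand-in, assuming the matrix is stored as an array of rows.

```javascript
// Minimal sketch of a flatten() helper: turn a 4x4 matrix stored as
// an array of rows into a flat 16-element array, ready to be wrapped
// in a Float32Array for gl.uniformMatrix4fv().
function flatten(matrix) {
  const out = [];
  for (const row of matrix) {
    for (const v of row) out.push(v);
  }
  return out;
}

// Example: the identity matrix as an array of rows
const identity = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];
// flatten(identity) gives 16 elements in row order
```

Note that with the transpose argument of uniformMatrix4fv set to false (WebGL requires false), the GPU interprets the flat data as column-major, so the JS-side storage convention must be chosen to match; the identity matrix is symmetric, so it is unaffected either way.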
WebGL wrapper libraries
As you can probably appreciate, the developer has to issue a long sequence of function calls to render a scene
As a developer you will probably end up wrapping these in a library
Alternative: use pre-built wrapper libraries, e.g. three.js
Disadvantage: less control, have to learn the wrapper as well as the WebGL API itself
Helper libraries
Even if using raw WebGL, it's helpful to use libraries for basic matrix manipulations and controlling perspective
Two useful helper libraries:
Sylvester (sylvester.jcoglan.com), matrix/vector maths
glUtils, useful functions for basic OpenGL operations such as setting perspective (public domain, available from e.g. free-map.org.uk/~nick/webgl/glUtils.js)
Links
www.khronos.org/webgl - official site
http://www.learningwebgl.com, very good tutorial series (blog-based)
www.free-map.org.uk/~nick/webgl/ - my own examples
www.free-map.org.uk/3d/ - own example combining OpenStreetMap data and NASA SRTM height data