In part 1 of this series, we discussed the mathematical preliminaries for a basic, didactic software rendering scheme for three-dimensional graphics. In this second part, we will build the renderer using HTML5 canvas and JavaScript. Canvas is the ideal technology for exploring the basics of graphics, because (i) reloading a page is an incredibly fast way to iterate and test small changes in code, and (ii) everybody has access to a web browser, so there is no barrier to entry for development (setting up an IDE, etc.) or dissemination.
We don’t need much scaffolding or any libraries. This demo is going to end up as a self-contained HTML file with less than 100 lines of code. We will start in a blank text file with this:
<canvas id="c" width="800" height="450" style="border: 1px solid black"></canvas>
<script type="text/javascript">
</script>
Everything that follows will go in the script tag.
We’ll have occasion to use some helper functions. These two are general-purpose vector operations, where we represent vectors by length-three JavaScript arrays.
function Dot(V, W)
{
    var D = 0;
    for (var i = 0; i < 3; i++) D += V[i] * W[i];
    return D;
}

function Cross(V, W)
{
    return [ V[1] * W[2] - V[2] * W[1], V[2] * W[0] - V[0] * W[2], V[0] * W[1] - V[1] * W[0] ];
}
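As a quick sanity check (these lines are illustrative only, not part of the demo), the standard basis vectors behave as expected under these two functions:

console.log(Dot([ 1, 0, 0 ], [ 0, 1, 0 ]));   // 0, since the vectors are orthogonal
console.log(Cross([ 1, 0, 0 ], [ 0, 1, 0 ])); // [ 0, 0, 1 ], the right-handed cross product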
We can define a Camera object which holds all of the properties discussed in the previous article:
var Camera =
{
    "p": [ 0, 0, 1 ],
    "c": [ 0, 0, -1 ],
    "u": [ 0, 1, 0 ],
    "d0": 1
};
Camera.g = Cross(Camera.c, Camera.u);
The members of Camera are named according to the nomenclature of part 1. The observer (Camera.p) is at (0, 0, 1) in our space, and it is looking straight along the z-axis, in the negative direction (Camera.c). The camera is “right-side up”, with Camera.u pointing in the positive y direction.
Let’s take a moment to orient ourselves in our three-dimensional space: we want x and y to behave in the usual way, so that a positive change in x corresponds to movement to the right, and a positive change in y corresponds to movement upwards. In that case, a right-handed coordinate system has z coming out of the screen towards us. Therefore movement deeper into the virtual environment corresponds to increasingly negative values of z, hence the -1 in Camera.c.
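As a further check (again, not part of the demo itself), Camera.g, computed above as the cross product of the viewing direction and the up vector, comes out along the positive x direction, i.e. pointing to the camera's right:

console.log(Camera.g); // [ 1, 0, 0 ]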
People who are already familiar with the HTML5 canvas may know that (i) increasing y in canvas coordinates corresponds to movement downwards, and (ii) canvas coordinates are relative to the top-left corner, not the center. For those reasons, we will define
function ScreenToCanvasCoordinates(ScreenX, ScreenY)
{
    return { "X": 400 + PixelsPerUnitLength * ScreenX, "Y": 225 - PixelsPerUnitLength * ScreenY };
}
This function converts from “screen coordinates” to “canvas coordinates”, with “screen” having the abstract meaning from the earlier article. Canvas coordinates are measured in pixels, relative to the top-left corner of the HTML canvas. The 400 and 225 correspond to half of the chosen width and height of the canvas DOM element. The subtraction in computing Y accounts for the upside-down y-axis employed by the canvas. In this way, we can do all of our work in a sane coordinate system and deal with all of the canvas quirks in just one location.
Screen coordinates are measured on, and relative to the center of, the abstract screen defined in the previous article. The unit of length on the screen is the same as in our three-dimensional space, which is to say completely arbitrary. We have already made reference to a conversion factor
var PixelsPerUnitLength = 500;
which allows us to convert between lengths in our abstract space and pixels on the canvas. The value 500 is chosen arbitrarily, but it has a clear effect on the size of rendered objects.
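To illustrate with a couple of throwaway calls (not part of the demo): the origin of screen coordinates lands at the center of the canvas, and a screen displacement of 0.1 units becomes 50 pixels:

console.log(ScreenToCanvasCoordinates(0, 0));     // { X: 400, Y: 225 }
console.log(ScreenToCanvasCoordinates(0.1, 0.1)); // { X: 450, Y: 175 }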
We finally come to it: we are going to render a scene. We will specify a set of points with three-dimensional coordinates, and then draw them in the perspective seen by the camera. Supposing we have an array Points, each element of which is an object { "r": [...], "Color": ... }, our drawing routine will look like this:
// 0. Get a drawing context for the HTML canvas.
function Draw()
{
    // 1. Clear the canvas
    // 2. Iterate through Points
    //    Compute the screen coordinates for the point
    //    If the point is behind the screen, ignore it
    //    Convert screen coordinates to canvas coordinates
    //    Draw the point on the canvas
}
It’s straightforward to flesh out the pseudocode:
var Context = document.getElementById("c").getContext("2d");

function Draw()
{
    // Clear the canvas.
    Context.clearRect(0, 0, Context.canvas.width, Context.canvas.height);

    // Sort far-to-near, so that near points are drawn on top of far ones.
    Points.sort(function(A, B) { return A.r[2] - B.r[2]; });

    Points.forEach
    (
        function(P)
        {
            // Offset is the vector from the observer to the point;
            // d is its component along the viewing direction (the depth).
            var Offset = [ P.r[0] - Camera.p[0], P.r[1] - Camera.p[1], P.r[2] - Camera.p[2] ];
            var d = Dot(Offset, Camera.c);

            // Ignore points behind the camera.
            if (d <= 0) return;

            // Project onto the screen, then convert to canvas coordinates.
            var ScaleFactor = Camera.d0 / d;
            var ScreenX = ScaleFactor * Dot(Offset, Camera.g);
            var ScreenY = ScaleFactor * Dot(Offset, Camera.u);
            var Coords = ScreenToCanvasCoordinates(ScreenX, ScreenY);

            // Draw the point as a solid disk whose radius shrinks with depth.
            Context.fillStyle = P.Color;
            Context.beginPath();
            Context.arc(Coords.X, Coords.Y, PixelsPerUnitLength * ScaleFactor * 0.1, 0, 2 * Math.PI);
            Context.fill();
        }
    );
}
The quantity Offset is the vector from the observer to the point in question, and d is its component along the viewing direction, i.e. the point’s depth in front of the camera. Each point is drawn on the canvas as a solid disk, with the radius scaled to indicate depth. The sorting of the points makes the renderer draw the far points first; that way, things that are close are drawn in front of things that are far. This is a very rudimentary way to handle depth. It won’t be good enough when we start to draw things more complicated than points, but that will be discussed in a later article.
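As a worked example (the numbers are made up purely for illustration), consider a point at (0.2, 0.1, -1) viewed by the camera defined above:

// Offset      = (0.2, 0.1, -1) - (0, 0, 1)      = (0.2, 0.1, -2)
// d           = Offset . c                      = 2        (in front of the camera)
// ScaleFactor = d0 / d = 1 / 2                  = 0.5
// ScreenX     = 0.5 * (Offset . g) = 0.5 * 0.2  = 0.1
// ScreenY     = 0.5 * (Offset . u) = 0.5 * 0.1  = 0.05
// Canvas      : X = 400 + 500 * 0.1 = 450,  Y = 225 - 500 * 0.05 = 200
// Disk radius = 500 * 0.5 * 0.1                 = 25 pixels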
Now if we populate the Points array and call Draw(), we will see the scene in perspective.
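For example, a minimal scene might look like this (the particular coordinates and colors here are placeholders, not taken from the original demo):

var Points =
[
    { "r": [ -0.3,  0.1, -2 ], "Color": "crimson" },
    { "r": [  0.0,  0.0, -3 ], "Color": "seagreen" },
    { "r": [  0.3, -0.1, -4 ], "Color": "steelblue" }
];
Draw();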
The barebones demonstration is a 77-line HTML file with no external dependencies (and a quarter of those lines just define the points that get rendered). Below, we use knockout.js to make a more interactive demo.
There are many interesting topics to explore from this point. Some which I hope to discuss in future articles are