Overview of the homework assignments

This page contains useful information to help you complete your homework assignments. It also explains each stage: the six HWs together lead you to implement a simple but useful and complete scanline renderer, in steps that build on each other.

First, a note on programming languages/platforms. As mentioned in class, you have three choices: C/C++, Processing, or Javascript/canvas.

Here is a little bit on each [note - the Processing and canvas code fragments below were adapted from bits of existing code found on the web].

C/C++

This is how these homeworks have 'always' been done (by students in the past). The .zip file provided for each assignment contains C++-based files, to be used with Visual Studio. Alternatively, you can use C++ on Macs, or on USC's 'aludra' workstations. In all cases, you'd be coding in vanilla C++, without the use of ANY additional publicly available library, including but not limited to OpenGL, image-reading/writing libraries, etc. You are only allowed to set pixel values in a PPM image file structure, and write it out as a viewable .ppm image file (viewable using IrfanView, 'xv' and many other image readers).
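
For the curious, here is a minimal C++ sketch of this idea - this is NOT the provided starter code, and the WIDTH/HEIGHT, framebuffer, setPixel() and writePPM() names are just illustrative assumptions:

#include <cstdio>

const int WIDTH = 256, HEIGHT = 256;
unsigned char framebuffer[HEIGHT][WIDTH][3];   // one R,G,B triple per pixel

// store an RGB value at pixel (x,y), ignoring out-of-bounds coordinates
void setPixel(int x, int y, unsigned char r, unsigned char g, unsigned char b) {
  if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return;
  framebuffer[y][x][0] = r;
  framebuffer[y][x][1] = g;
  framebuffer[y][x][2] = b;
}

// write the framebuffer out as a binary ('P6') PPM file
void writePPM(const char* filename) {
  FILE* f = fopen(filename, "wb");
  if (!f) return;
  fprintf(f, "P6\n%d %d\n255\n", WIDTH, HEIGHT);
  fwrite(framebuffer, 1, sizeof(framebuffer), f);
  fclose(f);
}

In a sketch like this, you'd call setPixel() for every pixel you want colored, then call writePPM("out.ppm") once at the end to get a file you can open in IrfanView, 'xv', etc.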

Processing

Processing is an extremely simple, artist-friendly programming language based on Java (a Javascript port is available as well). If you use Processing, be aware that you CANNOT use any built-in function for drawing lines, polygons, etc., nor can you use OpenGL (a version of which comes with Processing). You can ONLY use a setPixel() call, defined as shown below:

We define setPixel as follows:

void setPixel(int x, int y) {
  line(x,y,x,y);
}

In other words, we use the built-in line() call to draw a single-pixel line, ie. to plot a single pixel (a single white pixel against a black background). To repeat, all your HWs should only use setPixel() as defined above; no additional Processing or extra library calls are allowed.

Note - you'll need to 'translate' the intent of each homework into Processing requirements. Your TA can assist with this.

Javascript/canvas

The HTML5 spec includes a 'canvas' element, which is a 2D surface that can be included in any web page. On such a 2D surface, the spec defines a variety of primitives such as line, arc, rectangle, Bezier curve, text, etc. For your HWs, you are ONLY allowed to use a simple point-plotting call, as defined below:

As you can see, setPixel is:

function setPixel(imageData, x, y, r, g, b, a) {
    // imageData is the pixel array obtained from a canvas 2D context via getImageData();
    // each pixel occupies 4 consecutive bytes (R, G, B, A)
    var index = (x + y * imageData.width) * 4;
    imageData.data[index+0] = r;
    imageData.data[index+1] = g;
    imageData.data[index+2] = b;
    imageData.data[index+3] = a;   // alpha is typically 255, for a fully opaque pixel
}

Again, you cannot use any other built-in call (eg. rect()), or additional Javascript library calls, including but not limited to WebGL, other rendering calls, etc.

Note - as with Processing above, you need to translate each HW's requirements into Javascript/canvas equivalents (you can ask your TA to help).

The 'magic' of rendering

What is the point of restricting you to using a setPixel() call, to do all your HWs? A single pixel can have one of ~16.7 million RGB color values (256x256x256, since the R,G,B channels can each have a value between 0..255). If you can plot a single pixel on a 2D canvas, you can plot ("render") a group of them that belong to a 2D polygon (eg. a triangle), which in turn comes from a poly in 3D space. If you can render a single polygon, you can render a group of polygons that define a polymesh, ie. you can render a 3D object. If you can render one polymesh, you can render a group, ie. a "scene". It all leads back to being able to color (supply an RGB value for) just one pixel.
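
To make the pixel-to-polygon step concrete, here is a rough, hedged sketch (in C++, assuming a setPixel() call like the ones described above) of filling a 2D triangle using nothing but single-pixel plots. It uses a simple edge-function/bounding-box test, which is just one way to do it and not necessarily the exact algorithm HW2 asks for:

#include <algorithm>
#include <cmath>

// assumed available, eg. a pixel-plotting routine like the one in the PPM sketch earlier
void setPixel(int x, int y, unsigned char r, unsigned char g, unsigned char b);

// signed "edge function": which side of the edge (a->b) the point p lies on
float edgeFn(float ax, float ay, float bx, float by, float px, float py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// fill a 2D triangle by testing every pixel center inside its bounding box
void fillTriangle(float x0, float y0, float x1, float y1, float x2, float y2,
                  unsigned char r, unsigned char g, unsigned char b) {
  int minX = (int)std::floor(std::min(x0, std::min(x1, x2)));
  int maxX = (int)std::ceil(std::max(x0, std::max(x1, x2)));
  int minY = (int)std::floor(std::min(y0, std::min(y1, y2)));
  int maxY = (int)std::ceil(std::max(y0, std::max(y1, y2)));
  for (int y = minY; y <= maxY; y++) {
    for (int x = minX; x <= maxX; x++) {
      float px = x + 0.5f, py = y + 0.5f;   // test the pixel's center
      float w0 = edgeFn(x1, y1, x2, y2, px, py);
      float w1 = edgeFn(x2, y2, x0, y0, px, py);
      float w2 = edgeFn(x0, y0, x1, y1, px, py);
      // inside if the point is on the same side of all three edges
      if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0))
        setPixel(x, y, r, g, b);
    }
  }
}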

By incorporating additional calculations to account for lighting and material behavior, and by using pixel values from image 'maps', you can create extremely sophisticated imagery, drawing from a color palette of ~16.7M colors - that is pretty much how images for games, visual effects and animation are created (note - these certainly involve modeling, animation, camera movement and 'fx' calculations too - here we are only talking about the rendering, ie. pixel-creation, aspects).
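
As a tiny taste of those 'additional calculations', below is a hedged C++ sketch of a basic diffuse (Lambertian, 'N dot L') shade for a single pixel - just an illustration of the idea, not the specific shading model HW4 will require; the shadePixel() name and its parameters are made up for this example:

#include <algorithm>

// diffuse ('N dot L') shade for one pixel; N and L are assumed to be unit-length 3D vectors
void shadePixel(const float N[3], const float L[3],
                const float surfaceRGB[3],      // material color, each component in 0..1
                unsigned char out[3]) {         // resulting 0..255 pixel value
  float nDotL = N[0]*L[0] + N[1]*L[1] + N[2]*L[2];   // cosine of the angle between N and L
  nDotL = std::max(0.0f, nDotL);                     // surfaces facing away from the light get none
  for (int i = 0; i < 3; i++)
    out[i] = (unsigned char)(surfaceRGB[i] * nDotL * 255.0f);
}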

To put it a different way: if you can supply the necessary calculations to color each pixel of an image, you can render ANYTHING at all, as long as you have a way to color a single pixel :) Now do you see why we insist that you use nothing but a pixel-coloring function? The idea is that you code everything else yourself in order to create your renders. The purpose of this exercise ("building a renderer from the ground up") is to give you detailed knowledge of the inner workings of a renderer.

Let us see an example of this. The following images offer a pictorial view of the steps that parallel what you'll be doing in each HW.

HW2: triangle scan converter

HW3: transformations

HW4: shading

HW5: texture-mapping

Here is the image used as the texture map.

HW6: anti-aliasing