Class 9: Pixel arithmetic

All sorts of math operations can be performed on images. In addition, images can be created from scratch using mathematical equations and functions.

- image raster: pixel array
A RECTANGULAR collection of picture elements. Each pixel is a 'cell', with an (i,j) index running from (0,0) to (xres-1, yres-1).

__pixel array__ - point operations - e.g. brightness, contrast, color->grey
Point ops involve a calculation on JUST one pixel at a time, repeated for all pixels in an image.

Brightness: color <- color*brightness_value.
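The brightness point op above can be sketched in a few lines; this is an illustrative example (not course-supplied code), assuming a grayscale raster stored as nested lists with values in [0,1]:

```python
# Point operation: scale every pixel by a brightness factor.
# Values are clamped to [0,1] so brightened pixels don't overflow.
def brighten(pixels, factor):
    """Return a new raster with each pixel multiplied by 'factor'."""
    return [[min(1.0, p * factor) for p in row] for row in pixels]

img = [[0.2, 0.5], [0.8, 1.0]]
bright = brighten(img, 1.5)   # 0.8 and 1.0 both clamp to 1.0
```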

__brightness enhancement - linear__The brightness adjustment can be made non-linear, using a bias() function:

bias(b,t) = t^(ln(b)/ln(0.5))

When brightness is non-linearly adjusted this way, it's called "gamma correction".
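The bias() formula translates directly to code; a minimal sketch (note that bias(b, 0.5) = b, and bias(0.5, t) is the identity):

```python
import math

# Perlin's bias function: remaps t in [0,1] with a power curve.
# The exponent ln(b)/ln(0.5) is exactly the "gamma" of gamma correction.
def bias(b, t):
    return t ** (math.log(b) / math.log(0.5))
```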

__gamma correction__Contrast: use an s-curve, e.g. a gain() function:

gain(g,t) = if (t<0.5) then bias(1-g, 2*t)/2 else 1 - bias(1-g, 2-2*t)/2

__contrast enhancement__Notice that we're brightening the bright regions and darkening the dark ones.
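The gain() s-curve can be sketched as follows (self-contained, so bias() is repeated); gain(0.5, t) is the identity, and g > 0.5 increases contrast:

```python
import math

def bias(b, t):                      # Perlin bias, as defined earlier
    return t ** (math.log(b) / math.log(0.5))

# Perlin's gain: an s-curve through (0.5, 0.5). For g > 0.5,
# darks get darker and brights get brighter (contrast enhancement).
def gain(g, t):
    if t < 0.5:
        return bias(1.0 - g, 2.0 * t) / 2.0
    return 1.0 - bias(1.0 - g, 2.0 - 2.0 * t) / 2.0
```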

Color correction - modify R,G,B values of pixels, often using a lookup table or a 'transfer curve'

Here is another point process:

__solarization____Color correction demo__Likewise, converting an RGB image to "black and white" (grayscale, really) is also a point operation.

__RGB -> gray__Operations such as color cycling, posterization (quantization), histogram equalization, etc. are also point processes.
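The RGB-to-gray point op is commonly a weighted sum of the channels; a sketch using the standard Rec. 601 luma weights (one common choice among several):

```python
# Point op: RGB -> grayscale via Rec. 601 luma weights.
# rgb_pixels is a list of rows of (r, g, b) tuples in [0,1].
def to_gray(rgb_pixels):
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_pixels]

gray = to_gray([[(1.0, 1.0, 1.0), (1.0, 0.0, 0.0)]])
```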

- area operations - 'convolution' - blurring, sharpening, edge detection
Here we compute a new value (color/grayscale) for each pixel by considering pixels in its neighborhood also, not just the pixel being processed. The rectangular grid of multiplication (scale) factors around a pixel's neighborhood (including itself) is called a 'convolution kernel' or 'mask'. You can imagine this mask being slid over every pixel to alter its value (compute a replacement).
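The slide-the-mask idea can be sketched directly; an illustrative 3x3 convolution on a grayscale raster, clamping indices at the image border (one of several edge-handling choices):

```python
# Area operation: convolve a grayscale raster with a 3x3 kernel.
def convolve3x3(pixels, kernel):
    h, w = len(pixels), len(pixels[0])
    out = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            acc = 0.0
            for kj in range(-1, 2):         # slide the mask over (i, j)
                for ki in range(-1, 2):
                    jj = min(max(j + kj, 0), h - 1)   # clamp at edges
                    ii = min(max(i + ki, 0), w - 1)
                    acc += pixels[jj][ii] * kernel[kj + 1][ki + 1]
            out[j][i] = acc
    return out

# A box-blur kernel: all weights equal, summing to 1.
box_blur = [[1 / 9] * 3 for _ in range(3)]
```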

__Convolution - 1____Convolution - 2____Convolution - 3____Convolution - 4__ - inverse/reverse mapping
When it comes to non-point/non-kernel processes (e.g. image distortion), 'forward mapping' (calculating where each source pixel ends up on a target grid) can cause overlaps/holes! So what's commonly done is 'inverse mapping', where every target pixel is synthesized from source pixels.

__inverse mapping__ - resizing - filters
While resizing images, we use filters such as 'box', 'triangle' and 'bicubic'.
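A triangle (bilinear) filter is the simplest of these to sketch: each target pixel maps back to a fractional source coordinate and blends the four surrounding source pixels. Illustrative code, assuming a grayscale raster:

```python
# Resize via inverse mapping with a triangle (bilinear) filter.
def resize_bilinear(pixels, new_w, new_h):
    h, w = len(pixels), len(pixels[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for j in range(new_h):
        for i in range(new_w):
            # map the target pixel back into source coordinates
            x = i * (w - 1) / max(new_w - 1, 1)
            y = j * (h - 1) / max(new_h - 1, 1)
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = x - x0, y - y0
            # blend the four neighbors with triangle (linear) weights
            top = pixels[y0][x0] * (1 - fx) + pixels[y0][x1] * fx
            bot = pixels[y1][x0] * (1 - fx) + pixels[y1][x1] * fx
            out[j][i] = top * (1 - fy) + bot * fy
    return out
```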

__Bicubic interpolation__ - image warping, use of noise, uv-based lookup
__Lecture from Princeton U.____Another page on warping____One more..__ - lens corrections
These are special warps meant to (de)warp imagery.

__Dewarping____Warping/Dewarping____Warping/Dewarping - more__ - texture, displacement,normal maps
__Texture mapping - 1____Texture mapping - 2____Texture mapping - 3____Texture mapping - 4____Texture mapping - 5____Texture mapping - 6____Texture mapping - 7____Displacement mapping__In normal mapping, we use an image (a normal map) to look up normals for lighting purposes, so it is really 'texture-based bump mapping'.

__Normal mapping__ - combining images - adding, etc.; use of alpha channel
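Combining via the alpha channel is typically the 'over' operation; a sketch for single-channel rasters with non-premultiplied alpha in [0,1]:

```python
# 'Over' compositing: blend foreground onto background per pixel
# using the foreground's alpha: out = a*fg + (1 - a)*bg.
def over(fg, bg, alpha):
    return [[a * f + (1.0 - a) * b
             for f, b, a in zip(frow, brow, arow)]
            for frow, brow, arow in zip(fg, bg, alpha)]
```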
__Compositing__ - image synthesis: simple patterns (ramp, checkerboard..), CA, fractals, genetic algorithms, IFS, chaos..
You can create a variety of patterns, pretty images, etc. by CALCULATING a color/grayscale value for each pixel.
Link: http://en.wikipedia.org/wiki/Mandelbrot_set
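As a concrete example of synthesizing an image purely by calculation, here is a minimal Mandelbrot-set sketch: each pixel's grayscale value is its escape time under z -> z^2 + c (the window bounds and iteration count are illustrative choices):

```python
# Image synthesis by calculation: color each pixel by Mandelbrot escape time.
def mandelbrot(xres, yres, max_iter=30):
    img = []
    for j in range(yres):
        row = []
        for i in range(xres):
            # map pixel (i, j) into the complex plane [-2,1] x [-1.5,1.5]
            c = complex(-2.0 + 3.0 * i / (xres - 1),
                        -1.5 + 3.0 * j / (yres - 1))
            z, n = 0j, 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n / max_iter)   # 1.0 = never escaped (inside the set)
        img.append(row)
    return img
```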