Class 9: Pixel arithmetic
All sorts of math operations can be performed on images. In addition, images can be created from scratch using math equations and functions.
• image raster: pixel array

A RECTANGULAR collection of picture elements. Each pixel is a 'cell', with an (i,j) index running from (0,0) to (xres-1,yres-1)

pixel array

• point operations - eg. brightness, contrast.., color->grey

Point ops involve a calculation on JUST one pixel at a time, repeated for all pixels in an image.

Brightness: color <- color*brightness_value.

brightness enhancement - linear

The brightness factor can be made non-linear first, using a bias() function:

```
bias(b,t) = t^(ln(b)/ln(0.5))
```

When brightness is non-linearly adjusted this way, it's called "gamma correction".
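The bias() point op can be sketched in a few lines of Python. The function names here (`bias`, `brighten`) are just illustrative; the formula is the one above, applied to a normalized channel value in [0,1]:

```python
import math

def bias(b, t):
    # bias(b,t) = t^(ln(b)/ln(0.5)); bias(0.5, t) is the identity,
    # and bias(b, 0.5) = b, so b directly sets the new midtone level.
    return t ** (math.log(b) / math.log(0.5))

def brighten(value, b=0.7):
    # b > 0.5 brightens, b < 0.5 darkens (value is a channel in [0,1])
    return bias(b, value)
```

Applying `brighten` to every pixel of the raster is exactly the point-op loop described above.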

gamma correction

Contrast: use an s-curve, eg. a gain() function:

```
gain(g,t) = if (t<0.5) then bias(1-g, 2*t)/2 else 1 - bias(1-g, 2-2*t)/2
```

contrast enhancement

Notice that we're brightening the bright regions and darkening the dark..
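A minimal Python sketch of the gain() s-curve (built on the bias() function above; names are illustrative):

```python
import math

def bias(b, t):
    return t ** (math.log(b) / math.log(0.5))

def gain(g, t):
    # An s-curve through (0.5, 0.5): g > 0.5 increases contrast
    # (darks get darker, brights get brighter); g = 0.5 is the identity.
    if t < 0.5:
        return bias(1 - g, 2 * t) / 2
    return 1 - bias(1 - g, 2 - 2 * t) / 2
```

Note that gain(g, 0.5) = 0.5 for any g: the midtone is fixed while values on either side are pushed apart, which is the brighten-the-bright/darken-the-dark behavior described above.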

Color correction - modify R,G,B values of pixels, often using a lookup table or a 'transfer curve'

Here is another point process:

solarization

Color correction demo

Likewise, converting an RGB image to "black and white" (grayscale, really) is also a point operation.

RGB -> gray
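As a sketch, one common RGB-to-gray point op uses the Rec. 601 luma weights (the weighting scheme is an assumption here; other weight sets exist):

```python
def rgb_to_gray(r, g, b):
    # Weighted sum of the channels (Rec. 601 luma weights);
    # green dominates because the eye is most sensitive to it.
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```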

Operations such as color cycling, posterization (quantization), histogram equalization, etc. are also point processes.

• area operations - 'convolution' - blurring, sharpening, edge detection

Here we compute a new value (color/grayscale) for each pixel by considering pixels in its neighborhood also, not just the pixel being processed. The rectangular grid of multiplication (scale) factors around a pixel's neighborhood (including itself) is called a 'convolution kernel' or 'mask'. You can imagine this mask being slid over every pixel to alter its value (compute a replacement).
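The mask-sliding idea can be sketched directly (a naive pure-Python version, clamping at the image edges; function names are illustrative):

```python
def convolve(img, kernel):
    # img: 2D list of grayscale values; kernel: 2D list with odd dimensions.
    # For each pixel, multiply its neighborhood by the kernel and sum.
    # Out-of-range neighbors are clamped to the nearest edge pixel.
    yres, xres = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * xres for _ in range(yres)]
    for j in range(yres):
        for i in range(xres):
            acc = 0.0
            for kj in range(kh):
                for ki in range(kw):
                    jj = min(max(j + kj - kh // 2, 0), yres - 1)
                    ii = min(max(i + ki - kw // 2, 0), xres - 1)
                    acc += img[jj][ii] * kernel[kj][ki]
            out[j][i] = acc
    return out

# eg. a 3x3 box-blur kernel: every neighbor weighted equally
box_blur = [[1 / 9] * 3 for _ in range(3)]
```

Swapping in a different kernel gives sharpening or edge detection with the same loop.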

Convolution - 1

Convolution - 2

Convolution - 3

Convolution - 4

• inverse/reverse mapping

When it comes to non-point/non-kernel processes (eg. image distortion), 'forward mapping' (calculating where a source pixel ends up on the target grid) can cause overlaps/holes! So what's commonly done is 'inverse mapping', where every target pixel is synthesized from source pixels.
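A sketch of inverse mapping, using rotation as the distortion (nearest-neighbor sampling; the function name is illustrative):

```python
import math

def rotate_inverse(img, angle_deg):
    # For each TARGET pixel, rotate BACKWARDS (by -angle) to find which
    # SOURCE pixel it came from. Every target pixel gets a value, so
    # there are no holes, unlike forward mapping.
    yres, xres = len(img), len(img[0])
    cx, cy = (xres - 1) / 2, (yres - 1) / 2
    a = math.radians(angle_deg)
    out = [[0] * xres for _ in range(yres)]
    for j in range(yres):
        for i in range(xres):
            x = math.cos(-a) * (i - cx) - math.sin(-a) * (j - cy) + cx
            y = math.sin(-a) * (i - cx) + math.cos(-a) * (j - cy) + cy
            si, sj = round(x), round(y)   # nearest-neighbor sample
            if 0 <= si < xres and 0 <= sj < yres:
                out[j][i] = img[sj][si]
    return out
```

In practice the source would be sampled with a filter (bilinear, bicubic) rather than nearest-neighbor, but the target-to-source direction of the loop is the point.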

inverse mapping

• resizing - filters

While resizing images, we use filters such as 'box', 'triangle' and 'bicubic'.
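The 'triangle' (tent) filter, for example, amounts to bilinear interpolation; a minimal sketch (illustrative names, grayscale image as a 2D list):

```python
def resize_bilinear(img, new_w, new_h):
    # Inverse-map each target pixel into the source grid and blend the
    # four surrounding source pixels, weighted by distance (tent filter).
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for j in range(new_h):
        for i in range(new_w):
            x = i * (w - 1) / max(new_w - 1, 1)
            y = j * (h - 1) / max(new_h - 1, 1)
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = x - x0, y - y0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[j][i] = top * (1 - fy) + bot * fy
    return out
```

A box filter would snap to the nearest source pixel instead of blending; bicubic blends a 4x4 neighborhood with cubic weights for smoother results.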

Bicubic interpolation

• image warping, use of noise, uv-based lookup

Lecture from Princeton U.

Another page on warping

One more..

• lens corrections

These are special warps meant to (de)warp imagery..
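As a sketch of the idea, here is a simple one-parameter radial model (an assumption for illustration; real lens profiles use more coefficients, eg. k2 and tangential terms): points are pushed toward/away from the image center by an amount that grows with radius.

```python
def undistort_point(x, y, k1, cx=0.5, cy=0.5):
    # Radial (de)warp: scale a point's offset from the center (cx, cy)
    # by (1 + k1 * r^2). k1 > 0 pushes outward, k1 < 0 pulls inward,
    # k1 = 0 is the identity. Coordinates are normalized to [0, 1].
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

Combined with inverse mapping (evaluate this at every target pixel and sample the source there), this warps or dewarps the whole image.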

Dewarping

Warping/Dewarping

Warping/Dewarping - more

• texture, displacement,normal maps

Texture mapping - 1

Texture mapping - 2

Texture mapping - 3

Texture mapping - 4

Texture mapping - 5

Texture mapping - 6

Texture mapping - 7

Displacement mapping

In normal mapping, we use an image (normal map) to look up normals for lighting purposes.. So it is really "texture-based bump mapping"..
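A sketch of the lookup: a normal map encodes a unit vector per pixel, remapped from [-1,1] into the [0,255] channel range, and lighting uses the Lambert N·L term (function names are illustrative):

```python
def decode_normal(r, g, b):
    # Map each channel from [0, 255] back to [-1, 1], then renormalize.
    # The "flat" normal (0, 0, 1) is stored as roughly (128, 128, 255).
    nx, ny, nz = r / 127.5 - 1, g / 127.5 - 1, b / 127.5 - 1
    length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
    return nx / length, ny / length, nz / length

def diffuse(normal, light):
    # Lambert diffuse term: clamp(N . L, 0) for a unit light direction
    d = sum(n * l for n, l in zip(normal, light))
    return max(d, 0.0)
```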

Normal mapping

• combining images - adding, etc.; use of alpha channel

Compositing
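The standard way to combine two images using the alpha channel is the Porter-Duff 'over' operator; a per-pixel sketch with straight (unpremultiplied) alpha, channels in [0,1]:

```python
def over(fg, bg):
    # Composite foreground (r, g, b, a) OVER background (r, g, b, a):
    # the foreground contributes fa, the background shows through (1 - fa).
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    a = fa + ba * (1 - fa)          # resulting coverage
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1 - fa)) / a
    return (blend(fr, br), blend(fg_, bg_), blend(fb, bb), a)
```

Adding, subtracting, or multiplying images pixel-by-pixel uses the same loop structure, just a different per-pixel formula.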

• image synthesis: simple patterns (ramp, checkerboard..), CA, fractals, genetic algorithms, IFS, chaos.. You can create a variety of patterns, pretty images, etc. by CALCULATING a color/grayscale value for each pixel..
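Two of the simplest synthesized patterns can be sketched directly, calculating a grayscale value per (i, j) pixel (function names are illustrative):

```python
def checkerboard(xres, yres, cell=8):
    # Alternating 0/255 cells: parity of the cell indices picks the color
    return [[255 * ((i // cell + j // cell) % 2) for i in range(xres)]
            for j in range(yres)]

def ramp(xres, yres):
    # Horizontal gray ramp: 0 at the left edge rising to 255 at the right
    return [[round(255 * i / (xres - 1)) for i in range(xres)]
            for j in range(yres)]
```

Fractals, IFS, CA, etc. follow the same pattern: loop over the pixel grid and compute each cell's value from (i, j).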