Multi-Camera Rendering

February, 2007

Introduction

RenderMan Interface

Dicing Cameras

Multi-Camera Output

Shading Considerations

Efficiency of Multi-Camera Rendering


Introduction

The 13.5 release introduces the concept of multiple, arbitrary cameras to RenderMan. These can be used to create images seen from multiple viewpoints in a single render pass, in less time than it would take to render multiple passes. This application note illustrates how to use this new functionality.

RenderMan Interface

A new RI call has been introduced to support arbitrarily defined cameras.

  RiCamera (RtToken name, ...parameterlist...)

RIB Binding:

  Camera name ...parameterlist...

Example:

  Camera "rightcamera"

This function takes a snapshot of the camera description from the current graphics state options and saves it under name. This camera description can then be referred to by name in subsequent calls to RiAttribute or RiDisplay. The saved description includes the current camera-to-world transformation along with the camera options in effect: the projection, screen window, clipping planes, depth of field settings, and shutter.

The camera description which is created is itself an option (i.e. part of the global state). Hence, RiCamera is valid only before RiWorldBegin.

RiCamera also creates a marked coordinate system with the same name (similar to RiCoordinateSystem). This coordinate system can then be referred to by name in subsequent shaders, or in RiTransformPoints.
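For instance, a shader can recover the position of a camera named "right" in "current" space by transforming the origin of its marked coordinate system (a minimal sketch; this same idiom appears again in the shading examples below):

  point Eright = transform("right", "current", point(0));

The same named space can be used with RiTransformPoints from the C API.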

The renderer reserves two specially marked camera definitions: the current camera definition at RiFrameBegin is named "frame", and the current camera definition at RiWorldBegin is named "world". It is an error to define an arbitrary camera with these reserved names.
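Since these reserved names behave like any other saved camera name, a display can be bound back to the main viewpoint explicitly. This is a minimal sketch; it assumes the reserved "world" name is accepted anywhere a camera name is expected:

  Display "+main.tif" "tiff" "rgba" "string camera" ["world"]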

Dicing Cameras

Dicing cameras have been extended to support arbitrarily defined cameras. The "referencecamera" parameter to Attribute "dice" now takes the name of a camera defined using RiCamera. If such a camera is specified, it will take precedence over all other cameras when dicing calculations are performed.
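In RIB, selecting a dicing reference camera might look like the following sketch, which assumes a camera named "right" has already been declared with Camera:

  # inside the world block, for a specific piece of geometry:
  Attribute "dice" "string referencecamera" ["right"]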

Multi-Camera Output

Once multiple cameras have been created, a single render pass can take advantage of them to render pixels from multiple viewpoints. This multi-camera output is enabled through the AOV (arbitrary output variable) functionality. A new parameter "string camera" has been added to RiDisplay; it accepts the name of a camera previously defined with RiCamera. When attached to a display, the image resulting from that display will be rendered from the viewpoint of the specified camera.

When multi-camera output is enabled, all geometry is still shaded exactly once. This is what allows the single pass approach to be faster than multiple pass renders: the shading results can be reused for multiple cameras. (Please see Efficiency of Multi-Camera Rendering for more information on speed and memory issues.) However, because shading occurs only once, there are view dependent shading considerations; these are explained in the next section.

To illustrate the usage, here is a simple example defining a "right" camera which has a different ScreenWindow than the "world" camera:

##RenderMan RIB
Projection "perspective" "fov" [45]
Format 256 256 1.0
PixelSamples 5 5
ShadingRate 0.25
Translate 0 -1 10
TransformBegin 
    ScreenWindow -0.75 1.25 -1 1
    Camera "right"
TransformEnd 
ScreenWindow -1.25 0.75 -1 1
Display "left.tif" "tiff" "rgba" 
DisplayChannel "float a"
DisplayChannel "color Ci"
Display "+right.tif" "tiff" "Ci,a" "quantize" [0 255 0 255] "string camera" ["right"]
FrameBegin 5
    TransformBegin 
        LightSource "shadowspot" "mylight" "float intensity" [250]
            "point from" [0 3 -15] "point to" [0 0 0] "shadowname" "raytrace"
    TransformEnd 
    WorldBegin 
        AttributeBegin
            Attribute "visibility" "string transmission" ["opaque"]
            Color [1 0.25 0.25]
            Surface "plastic" "float roughness" [0.05] "color specularcolor" [0 1 1]
            Translate 0 -1 0
            Rotate -90 1 0 0
            Geometry "teapot"
        AttributeEnd
        AttributeBegin
            Surface "texmap" "string texname" "ratGrid.tex"
            Scale 6 6 1
            Patch "bilinear" "P" [ -1 1 4 1 1 4 -1 -1 4 1 -1 4]
        AttributeEnd
    WorldEnd 
FrameEnd
 
left.tif: ScreenWindow -1.25 0.75 -1 1   right.tif: ScreenWindow -0.75 1.25 -1 1

In this example, since only the screen window has changed, the position of the eye is the same for both cameras. This means that from the two points of view there is no difference in the occlusion of the background plane by the teapot; in other words, there is no obvious parallax, which is not what we typically associate with a stereo effect. Here is a more interesting example, which replaces the ScreenWindow shift with an actual Translate of the camera position. Here, the two cameras are separated by two units along the X axis.

##RenderMan RIB
Projection "perspective" "fov" [45]
Format 256 256 1.0
Translate 0 -1 10
TransformBegin 
    Translate -1.0 0 0
    Camera "right"
TransformEnd 
Translate 1.0 0 0
Display "left.tif" "tiff" "rgba" 
DisplayChannel "float a"
DisplayChannel "color Ci"
Display "+right.tif" "tiff" "Ci,a" "quantize" [0 255 0 255] "string camera" ["right"]
FrameBegin 5
    TransformBegin 
        LightSource "shadowspot" "mylight" "float intensity" [250]
            "point from" [0 3 -15] "point to" [0 0 0] "shadowname" "raytrace"
    TransformEnd 
    WorldBegin 
        Attribute "visibility" "string transmission" ["opaque"]
        Color [1 0.25 0.25]
        Surface "plastic" "float roughness" [0.05] "color specularcolor" [0 1 1]
        Rotate -90 1 0 0
        Geometry "teapot"
    WorldEnd 
FrameEnd
 
left.tif: Translate 1.0 0 0   right.tif: Translate -1.0 0 0

Compared to the previous ScreenWindow example, the stereo effect is much more pronounced: the effects of Translate can be observed in the parallax of the teapot against the textured background. Note also that the diffuse illumination of the teapot is subtly different: from the point of view of the left camera, more of the right side of the teapot is darker, while from the point of view of the right camera, more of the left side of the teapot is darker. More obviously, the specular highlight has shifted, which is to be expected given the shift in cameras.

Shading Considerations

It is important to reemphasize that when using the multi-camera output functionality, shading only occurs once, from the world camera. In some cases, this may lead to image artifacts. For example, upon close examination of the previous pair of images, you may notice that the specular highlight in the right image is slightly off (it is too far to the left). In many cases these artifacts may be acceptable; it depends on how much view dependent shading your shaders perform.

To explore how we can write shaders that operate correctly with multi-camera outputs, we first exaggerate the view dependent artifacts in our example by changing the cameras so that they point 90 degrees away from each other:

##RenderMan RIB
Projection "perspective" "fov" [45]
Format 256 256 1.0
PixelSamples 5 5
ShadingRate 0.25
Translate 0 0 10
TransformBegin 
    Rotate 45.0 0 1 0
    Camera "right"
TransformEnd 
Rotate -45 0 1 0
Display "left.tif" "tiff" "rgba" 
DisplayChannel "float a"
DisplayChannel "color Ci"
Display "+right.tif" "tiff" "Ci,a" "quantize" [0 255 0 255] "string camera" ["right"]
FrameBegin 5
    TransformBegin 
        LightSource "shadowspot" "mylight" "float intensity" [250]
            "point from" [0 3 -15] "point to" [0 0 0] "shadowname" "raytrace"
    TransformEnd 
    WorldBegin 
        AttributeBegin
            Attribute "visibility" "string transmission" ["opaque"]
            Color [1 0.25 0.25]
            Surface "plastic" "float roughness" [0.05] "color specularcolor" [0 1 1]
            Translate 0 -1 0
            Rotate -90 1 0 0
            Geometry "teapot"
        AttributeEnd
        AttributeBegin
            Surface "texmap" "string texname" "ratGrid.tex"
            Scale 6 6 1
            Patch "bilinear" "P" [ -1 1 4 1 1 4 -1 -1 4 1 -1 4]
        AttributeEnd
    WorldEnd 
FrameEnd 
 
left.tif: correct   right.tif: incorrect diffuse and specular

Here, the picture from the right camera is obviously wrong. It looks like the diffuse contribution is cut off, and the specular highlight is in the wrong place. We can address these by taking a look at the shader used: plastic. The venerable plastic shader looks something like this by default:

surface
plastic( float Ks=.5, Kd=.5, Ka=1, roughness=.1; color specularcolor=1 )
{
    normal Nf = faceforward(normalize(N), I);
    vector V = -normalize(I);
    Oi = Os;
    Ci = Os * (Cs * (Ka*ambient() + Kd*diffuse(Nf)) +
               specularcolor * Ks * specular(Nf,V,roughness));
}

The problem with the diffuse component stems from a view dependent calculation: the faceforward of the normal N. I should change based on the camera, but because shading only occurs for the world camera (in this example, the left camera), using faceforward with respect to I produces incorrect results for the other camera (the right camera). Consider a problematic point P on the far side of the teapot. The use of faceforward at P presents no problem for the left camera, because the point in question isn't visible from that viewpoint anyway; but it certainly presents a problem for the right camera, which can actually see P. Flipping the normal with respect to the left camera's I causes the light source to be missed entirely in diffuse, producing the diffuse lighting artifacts shown.

We can amend this simply by avoiding the use of faceforward entirely. Whenever possible, this is a good idea anyway, although it does require that all geometry be correctly modelled with normals facing outwards. This change results in the following version of plastic:

surface
stereoplastic( float Ks=.5, Kd=.5, Ka=1, roughness=.1; color specularcolor=1 )
{
    normal Nf = normalize(N);
    vector V = -normalize(I);
    Oi = Os;
    Ci = Os * (Cs * (Ka*ambient() + Kd*diffuse(Nf)) +
               specularcolor * Ks * specular(Nf,V,roughness));
}
 
left.tif: correct   right.tif: incorrect specular

Even with this modification, the specular highlight is still obviously in the wrong place. It should be easy to understand why: there is still some view dependent shading. Namely, the specular component in the modified plastic shader still depends on the view dependent vector I. From the point of view of the right camera, this I is calculated from the left camera, so the vector V = -normalize(I) used in the specular shadeop is incorrect. In fact, this is the heart of the problem: any calculation which depends on I or E is view dependent and will be incorrect for additional camera outputs; those calculations need to be performed once per camera.

Hence, since shading only occurs once, to compute the correct color for the "right" camera we need to: compute the view dependent results that depend on the right camera; add to these any view independent results, which can be shared between cameras; and store the sum in a new output color, which will hold the desired color for the right camera. The first step reduces to computing the position of the "right" camera in current space. To do this, we can take advantage of the coordinate system implicitly saved by the RiCamera statement. This coordinate system has the same name as the camera (in this case, "right"), and we can use it to compute the "right" camera position in current space by transforming the point (0,0,0) from the "right" coordinate system to current space: Eright = transform("right", "current", point(0)).

With this computed, we can compute the incident camera ray direction vector Iright simply by subtracting Eright from P. It's this vector that we want to use instead of I for the view dependent specular shading for the "right" camera. Doing that modification in our plastic shader results in the following:

surface
stereoplastic( float Ks=.5, Kd=.5, Ka=1, roughness=.1; color specularcolor=1;
		output varying color rightCi = 0;)
{
    normal Nf = normalize(N);
    vector V = -normalize(I);
    Oi = Os;
    // Compute the view independent color
    color Cvi = Cs * (Ka*ambient() + Kd*diffuse(Nf));

    // Left camera
    Ci = Os * (Cvi + specularcolor * Ks * specular(Nf,V,roughness));

    // Right camera
    point Eright = transform("right", "current", point(0));
    vector Iright = P - Eright;
    vector Vright = -normalize(Iright);
    rightCi = Os * (Cvi + specularcolor * Ks * specular(Nf,Vright,roughness));
}

Then, because we want to distinguish the right camera's computed color from the left camera's computed color (still stored in Ci), we use an arbitrary output variable. This requires an output color shader parameter (in this case, "rightCi"), as well as modifications to the Display lines in the RIB file:

##RenderMan RIB
Projection "perspective" "fov" [45]
Format 256 256 1.0
PixelSamples 5 5
ShadingRate 0.25
Translate 0 0 10
TransformBegin 
    Rotate 45 0 1 0
    Camera "right" 
TransformEnd 
Rotate -45 0 1 0
Display "left.tif" "tiff" "rgba" 
DisplayChannel "float a"
DisplayChannel "color rightCi"
Display "+right.tif" "tiff" "rightCi,a" "quantize" [0 255 0 255] "string camera" ["right"]
FrameBegin 5
    TransformBegin 
        LightSource "shadowspot" "mylight" "float intensity" [250]
            "point from" [0 3 -15] "point to" [0 0 0] "shadowname" "raytrace"
    TransformEnd 
    WorldBegin 
        AttributeBegin
            Attribute "visibility" "string transmission" ["opaque"]
            Color [1 0.25 0.25]
            Surface "stereoplastic" "float roughness" [0.05] "color specularcolor" [0 1 1]
            Translate 0 -1 0
            Rotate -90 1 0 0
            Geometry "teapot"
        AttributeEnd
        AttributeBegin
            Surface "texmap" "string texname" "ratGrid.tex"
            Scale 6 6 1
            Patch "bilinear" "P" [ -1 1 4 1 1 4 -1 -1 4 1 -1 4]
        AttributeEnd
    WorldEnd 
FrameEnd 
 
left.tif: correct   right.tif: correct

We can see that the specular and diffuse shading on the teapot are finally correct. The background plane is missing its color in the right image, however, because the "texmap" shader was not modified to output the "rightCi" AOV.

View dependent shading extends to other effects such as ray tracing. Consider a simple mirror shader:

surface
mirror(float Kr = 1)
{
    color Crefl = 0, hitc = 0;
    vector reflDir = reflect(I, N);
    gather("illuminance", P, reflDir, 0, 1, "surface:Ci", hitc) {
        Crefl += hitc;
    }
    Ci = 0.2 * Cs * diffuse(normalize(N)) + Kr * Crefl;
    Oi = 1;
}

If we apply this shader to a simple sphere reflecting a teapot, depending on the camera separation we may get incorrect reflections as illustrated below.

##RenderMan RIB
Projection "perspective" "fov" [45]
Format 256 256 1.0
PixelSamples 5 5
ShadingRate 0.25
Translate 0 0 5
TransformBegin 
    Rotate 45 0 1 0
    Camera "right" 
TransformEnd 
Rotate -45 0 1 0
Display "left.tif" "tiff" "rgba" 
DisplayChannel "float a"
DisplayChannel "color Ci"
Display "+right.tif" "tiff" "Ci,a" "quantize" [0 255 0 255] "string camera" ["right"]
FrameBegin 5
    TransformBegin 
        LightSource "shadowspot" "mylight" "float intensity" [250]
            "point from" [0 3 -15] "point to" [0 0 0] "shadowname" "raytrace"
    TransformEnd 
    WorldBegin 
        AttributeBegin
            Attribute "visibility" "int camera" [0] "int diffuse" [1] "int specular" [1]
            Color [1 0.25 0.25]
            Surface "defaultsurface"
            Translate 0 -1 -4
            Rotate -90 1 0 0
            Geometry "teapot"
        AttributeEnd
        AttributeBegin
            Surface "texmap" "string texname" "ratGrid.tex"
            Scale 6 6 1
            Patch "bilinear" "P" [ -1 1 4 -1 -1 4 1 1 4 1 -1 4]
        AttributeEnd
        AttributeBegin
            Attribute "visibility" "string transmission" ["opaque"]
            Surface "mirror"
            Color [0.25 0.25 1]
            Sphere 1 -1 1 360
        AttributeEnd
    WorldEnd 
FrameEnd 
 
left.tif: correct reflection   right.tif: incorrect reflection

We can modify our mirror shader to separate out the view dependent calculations for the left and right camera. In this case, it requires two separate gather statements:

surface
stereomirror(float Kr = 1; output varying color rightCi = 0;)
{
    // View independent color calculations
    color Cvi = 0.2 * Cs * diffuse(normalize(N));

    // View dependent calculation for left camera
    color Crefl = 0, hitc = 0;
    vector reflDir = reflect(I, N);
    gather("illuminance", P, reflDir, 0, 1, "surface:Ci", hitc) {
        Crefl += hitc;
    }
    Ci = Cvi + Kr * Crefl;

    uniform float raydepth;
    rayinfo("depth", raydepth);
    if (raydepth == 0) {
        // View dependent calculation for right camera - these only
        // need to be performed for primary rays. If we don't
        // perform the raydepth check, we'll be throwing lots of
        // wasted secondary rays.
        point Eright = transform("right", "current", point(0));
        vector Iright = P - Eright;
        Crefl = 0;
        reflDir = reflect(Iright, N);
        gather("illuminance", P, reflDir, 0, 1, "surface:Ci", hitc) {
            Crefl += hitc;
        }
        rightCi = Cvi + Kr * Crefl;
    }
    Oi = 1;
}
 
left.tif: correct reflection   right.tif: correct reflection

To summarize: in multi-camera output rendering, because shading only occurs once from the viewpoint of the main camera, we can write efficient shaders that produce the correct picture by observing a few simple rules: separate view independent calculations from view dependent ones, and share the view independent results between cameras; avoid faceforward (and other implicit uses of I) where geometry is modelled with correct normals; recompute any quantity that depends on I or E for each additional camera, using the camera position obtained by transforming point(0) from that camera's named coordinate system to current space; store each additional camera's result in its own output variable and bind that variable to the camera's display; and guard expensive per-camera work (such as extra gather calls) with a ray depth check so that it is performed only for primary rays.

Efficiency of Multi-Camera Rendering

As mentioned previously, multi-camera rendering is generally faster than multiple pass rendering, because all geometry is shaded only once for all cameras. Depending on the separation between cameras, however, memory requirements may go up: shaded results are reused between cameras, and the renderer may need to hold on to those results for a longer duration. Also, unless a specific dicing camera is specified, the renderer must compute the worst case dicing rates over all cameras in order for the shaded results to be acceptable from every viewpoint; this means that for any single viewpoint, more points may end up being shaded than strictly necessary. As a general rule, if no object is visible from more than one camera viewpoint, multi-camera output rendering will not be any more efficient than the equivalent multi-pass render.

A new XML statistic "cameraDetailDilation" has been added which allows one to estimate the effects of multi-camera rendering on speed and memory requirements. It is based on geometry detail: the number of pixels in raster space covered by geometry. The statistic is the ratio of the total geometry detail as computed from all cameras to the total geometry detail as computed from the main shading camera. In the XML report, it looks like this:

  Geometry detail expansion from multiple cameras: 1.01506

In this example, the use of multiple cameras means that each piece of geometry in the scene has, on average, a detail roughly 1.5% higher. This translates to an increased lifetime for the geometry, which in turn leads to increased memory use and increased render time due to less effective occlusion culling.

Pixar Animation Studios
(510) 752-3000 (voice)   (510) 752-3151 (fax)
Copyright © 1996- Pixar. All rights reserved.
RenderMan® is a registered trademark of Pixar.