Shader space is used to define how 3D procedural shaders (and projections) are applied to objects in a scene. This contrasts with using the natural parameterization of a surface (often called ST) to map 2D textures onto objects. A 3D procedural shader emanates in three dimensions, like marbled halva. Such shaders are often called solid shaders. Naturally, a solid shader must be given a point in 3D space as the center from which the shader expands. To define this point we use coordinate systems.
Any node in Maya (an object, a light, etc.) can be used to declare a coordinate system for a procedural shader, but the RISpec already predeclares a number of quite helpful shader spaces:
current 
This is the default coordinate system, in which all points begin and in which all lighting calculations occur. The choice of "current" space may vary between renderers. 
object 
This coordinate system is defined by the object(s) attached to the shader. Each object will declare its own coordinate space. 
shader 
The coordinate system active at the time that the shader was declared. 
world 
This uses the coordinate system active at WorldBegin. 
camera 
The coordinate system defined by the active camera: X points right, Y points up, and Z points into the scene. 
screen 
This is the perspective-corrected coordinate system of the camera's image plane. Coordinate (0,0) in "screen" space is looking along Z into the scene. 
raster 
The 2D projected space of the final output image, with units of pixels. Coordinate (0,0) in "raster" space is the upper-left corner of the image, with X and Y increasing to the right and down, respectively. 
NDC 
Short for Normalized Device Coordinates. This is like raster space with one exception: it is normalized so that X and Y both run from 0 to 1 across the whole image, with (0,0) at the upper left of the image and (1,1) at the lower right (regardless of the actual aspect ratio). 
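In the RenderMan Shading Language, these named spaces are passed to the built-in transform() function to move points out of "current" space. The sketch below is a hypothetical minimal solid shader (the shader name and its parameters are invented for illustration) that evaluates 3D noise in "shader" space:

```
/* Hypothetical solid shader: samples noise in "shader" space
   so the pattern is anchored to the shader's coordinate system. */
surface
solidNoise(float Kd = 1, freq = 4)
{
    /* Transform the shading point from "current" space into
       the coordinate system active when the shader was declared. */
    point Psh = transform("shader", P);

    /* A simple solid pattern: 3D noise sampled in shader space. */
    float pattern = noise(freq * Psh);

    normal Nf = faceforward(normalize(N), I);
    Ci = Cs * Kd * pattern * diffuse(Nf);
    Oi = Os;
}
```

Substituting "object" or "world" for "shader" in the transform() call anchors the same pattern to the primitive's own space or to the scene's world space instead.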
There are many cases where solid appearances are quite useful. Several common materials can only be modeled as solid appearances, and it is often much easier for shader developers to build solid materials. When working with solid appearances, the key is to be aware of what coordinate system each of your primitives lives in. This picture of three spheres is the result of placing the spheres in the scene, then scaling them to various sizes. Since the marble shader operates in the coordinate system of each primitive, you can see that the scale of the marble itself varies, especially between the large and small spheres.
Often, this is exactly the desired effect. Sometimes, however, you want the marble to remain at a constant scale, independent of any geometric transformations you apply. There are two ways around this behavior. The first relies on a common modeling tool: zeroing transforms. This tool resets all the transformations you've associated with DAG nodes up the hierarchy from your primitive while simultaneously transforming the CVs of the primitive itself. After zeroing transforms on an object, the wireframe doesn't change at all, but the coordinate system in which the object lives has changed! This can have drastic implications for the final rendered image.
The second technique involves sharing a single coordinate system among all three balls. To accomplish this we declare in the shader a coordinate system that the objects all share. We can enter this information in any Slim parameter that defines shader space. The name of the shader space parameter depends on who wrote the shader but is generally self-evident: Coordinate System, Space, or Shader Space, for example. In Slim, you can enter the name of any predeclared shader space or the name of any MTOR coordinate system shape node into this field. MTOR will then establish the named coordinate system as the shader space for all objects attached to that appearance.
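At the RIB level, the effect is a named coordinate system declared with the standard CoordinateSystem request and then referenced by name from a shader parameter. A hedged sketch of what such a scene fragment might look like (the "marble" shader and its "shaderspace" parameter are hypothetical; CoordinateSystem, Surface, and Sphere are standard RIB requests):

```
# Declare a named coordinate system at the current transformation.
TransformBegin
    Translate 0 5 0
    Rotate 45 0 1 0
    CoordinateSystem "marbleSpace"
TransformEnd

# Every primitive that references "marbleSpace" by name shares
# one shader space, so the marble pattern has a single scale
# regardless of how each sphere is individually scaled.
Surface "marble" "string shaderspace" ["marbleSpace"]
Sphere 1 -1 1 360
```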
This technique can be very handy when using MTOR coordinate systems, since these allow us to visualize the associated shader transformation interactively. By applying the standard transformation tools to the transformation node of the MTOR coordinate system, you can accurately move, rotate, and scale your shader's coordinate system to visualize the solid appearance independent of the orientation of the objects to which it's attached.
In computer graphics there are two fundamental kinds of geometric operations: deformation and transformation. Deformations cause individual CVs to move with respect to one another. Transformations affect the entire coordinate system of a primitive. Solid appearances can't sense geometric deformations. If you move or animate individual CVs on a primitive that has a solid appearance like marble attached, the effect will be as though the surface is swimming through the marble. Although this is a weird and interesting effect, it is usually undesirable. Fortunately, textures can be kept from swimming and made to stick to deforming objects through the use of reference geometry. The basic rule: if you're animating an object through deformations, try to use the object's natural surface parameterization (often called ST) to avoid the more computationally expensive reference geometry associated with solid (or projected) shaders.
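Reference geometry is conventionally passed to the shader as a primitive variable (often named __Pref) holding a frozen, undeformed copy of the surface points. A hypothetical sketch, assuming __Pref has been attached to the primitive by the modeling pipeline:

```
/* Hypothetical sketch: stick a solid pattern to a deforming surface.
   __Pref is assumed to be a primitive variable carrying the
   undeformed reference positions of the surface. */
surface
stickyNoise(float freq = 4;
            varying point __Pref = point(0, 0, 0))
{
    /* Sample the pattern at the reference position rather than
       the deformed P, so the texture doesn't swim as CVs animate. */
    float pattern = noise(freq * transform("object", __Pref));

    normal Nf = faceforward(normalize(N), I);
    Ci = Cs * pattern * diffuse(Nf);
    Oi = Os;
}
```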
Pixar Animation Studios
