Milkshape Modeling Tutorial for Return to Castle Wolfenstein Part VII -Shaders
Understanding How Shaders Work
Before you attempt to use shaders on a model, you need to know what they are and understand a little bit about them, especially the blending functions, so that you'll get the results that you want. A disclaimer here: I'm not a shader guru, and there may, in fact, be mistakes here. If there are, please let me know and I will change them; for the most part, however, this will suffice.
A shader is separated into two parts – the surface properties and the rendering stages. Let's look at a shader skeleton. Here ya go:

shadername
{
    surface property 1
    surface property 2
    {
        rendering stage 1
    }
    {
        rendering stage 2
    }
}
So, when a surface in the game world has a texture with a shader associated with it, that surface adopts the surface properties listed in the shader, and the texture on that surface is displayed according to the rendering stages. Common surface properties are things like light emission, deformation of the vertex points, and special properties like fog and water. These all become physical properties of the surface which this texture/shader combo has been mapped to. Not all the surface properties seem to work on models, so they are less important when you're dealing with model shaders.
Then you have the rendering stages. These tell the Q3 engine how to display the surface in the game world. In the skeleton above, you see that the computer makes a first pass at rendering the texture and then does a second pass.
Here’s a surface in the game world:
It has the following shader:
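Something like this typical two-stage shader (the path and texture name here are placeholder examples, not from the original):

```
textures/mymap/mytexture
{
    {
        map $lightmap        // first pass: the engine-generated lightmap
        rgbGen identity
    }
    {
        map textures/mymap/mytexture.tga    // second pass: the texture itself
        blendFunc filter     // blend the texture with the lightmap below it
        rgbGen identity
    }
}
```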
First the lightmap is drawn on top of the surface in the first rendering. Any blending or manipulation is also done.
Then the texture is drawn on top of the surface with the lightmap on it in the second rendering. Any blending or manipulation of this layer is also done.
The shader name must be 64 characters or less and is usually the path to the texture plus the texture name.
There are a number of surface properties. Among them are:
There are also a number of stage rendering properties, such as:
A Few Surface Properties
There are a few surface properties that are important to modeling. The most important, of course, is culling.
Culling really has 3 parameters: front, back, and none. Whichever side you tell the computer to cull, it will remove from the game. Front removes all the gray-sided front faces from your model. Back removes all the backfaces, the dark triangles in Milkshape (this is the default). None means both front and backfaces are visible.
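For a model surface that should be visible from both sides, such as a flag or a fence, a stage sketch like this would apply (the path is a made-up placeholder):

```
models/mapobjects/myflag/flag
{
    cull none                // keep both front and back faces visible
    {
        map models/mapobjects/myflag/flag.tga
        rgbGen vertex
    }
}
```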
Nopicmip and nomipmap. Alright, nopicmip will force people who have set their graphics resolutions very low to display this texture as a high quality texture anyway. This is useful to prevent blurring of high-detail textures.

Nomipmap. The computer generates scaled-down versions of your .tga file to display, depending on your distance from the texture. So if you're far away and look at the texture, you're really seeing a scaled-down version of the 256×256 texture, maybe a 64×64 version. These are called mipmaps. Nomipmap disables this. Probably useful for viewing high quality textures at a distance.
PolygonOffset will offset the texture a bit above the surface. This can be used in a second ‘detail’ shader to add detail to the first.
DeformVertexes will work on models. Cool, huh. There are a few options:
- Wave – This does weird stuff on a model. It sorta separates the faces of the model in a wave. Like a beating heart with all the faces opening and closing.
- Move – This is very cool. The model stays in place but the shader moves away. Unfortunately this is based on a wave function so your ‘flying shader’ eventually comes back to roost. It makes it look like the model is moving, when only the textures are.
- Bulge – Creepy bulging on all faces of the model. When the bulges are large you get some wacky results.
- Normal – Couldn’t get this to work on the model surfaces.
- Autosprite – for God’s sake DON’T USE THIS on a model. (under most circumstances)
- Autosprite2 – see above note.
Sort does work on model textures. Some of the interesting parameters are nearest, which makes the model draw itself in front of everything in view, including your own gunfire, and underwater, which will draw the model behind transparent textures like water.
q3map_surfaceLight works on misc_models; however, it doesn't seem to work on misc_gamemodels and script_model_meds. This is because the light needs to be computed statically by the Q3 tools. Careful, though! A light-emitting texture on a complex model could add a lot of time to the light compile stage.

q3map_lightImage and q3map_lightSubdivide give some more control over the model's light-emitting textures.
qer_editorimage gives the Radiant editor an icon to display for the texture. Not really necessary for model textures.
Surfaceparms don’t appear to have any effect on models.
The Important Stuff
Let’s look at the stage rendering properties in closer detail.
The rendering stage properties can be divided into 6 groups.
- Source maps – map, animap, clampmap
- This group finds the source maps used in texturing the surface
- Blending RGBA – blendFunc
- This function helps in mixing colors
- Discarding RGBA – alphaFunc
- The function discards information from the source maps
- Modifying RGBA – alphaGen, rgbGen
- This group modifies existing color or transparency data
- Z-buffering – depthFunc, depthWrite
- This group changes the Z-buffer
- Texture Coordinates – tcGen, tcMod
- This group plays with the UV map
Let’s take a close look at each of these in turn:
The source map group. There are three properties here. The first is the keyword map. Map finds the source map which will be used by that rendering stage. There can be one source map per rendering stage (as far as I know). So for instance you can use the source map of ‘map textures/models/mytexture.tga’ in the first rendering stage and then in the second rendering stage use a different texture.
You can also use '$lightmap' and '$whiteimage' instead of a pathname to a texture. '$lightmap' will use the engine-generated lightmap as a texture on the surface. What's a lightmap? A lightmap is a black and white map of the world corresponding to the amount of light hitting each surface in the game. This map can be used instead of a texture in a rendering pass. You can also use the keyword '$whiteimage'. This is most likely what it says, a white image. It is used for specular lighting on a model. What's specular lighting?
Well, instead of trying to explain it, fire up Milkshape and add specular lighting to your model. Open up a new model and get rid of any Materials associated with the groups. Now you should have no materials. Click the 'New' button at the bottom of the Materials tab. A new texture ball appears. Now click the 'Specular' button and select the color white (like a $whiteimage). Click the other three buttons, Diffuse, Ambient and Emissive, and select the color black. Now do a Select All from the menu and 'Assign' all the faces to the new material. Your model should look like shiny carbon paper. The white that is making the model flashy is called specular lighting. Notice that it changes when you move the model around. Try changing the specular lighting to black now.

I haven't received good results yet with $whiteimage, however.
Animap plays a sequence of targa files in a row so that it looks like a little filmstrip. An animap can have up to 8 targa files that repeat.
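The animMap keyword takes a frequency (frames per second) followed by the list of frames; the paths below are made-up placeholders:

```
{
    animMap 8 textures/anim/frame1.tga textures/anim/frame2.tga textures/anim/frame3.tga textures/anim/frame4.tga
}
```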
Clampmap clamps the edges of the source texture to the surface. This is usually used in combination with deformvertexes. To use this you would say “clampmap pathtotexture/texture.extension”.
O.K. now for blendFunc. It’s important to understand what this function does. BlendFunc, of course, blends color and transparency information. How does this work?
First of all vocabulary time. RGBA means red, green, blue, alpha. There is one of these values at each pixel on your texture. The red, green, and blue make up the color information and the alpha makes up the transparency information. These values are stored in 8-bit channels when you save the texture file. That’s why textures with transparency are 32-bit targa files and those without such information are 24-bit targa files.
When the rendering stage blends RGBA information it takes information from the source RGBA, or what you have selected as the source texture, and information from the destination, or RGBA information that was rendered previously on this surface, and then combines the two. The equation for doing this is:
Final RGBA info for this pixel = [SourceFunction(Source RGBA) + DestinationFunction(Destination RGBA)]
The source and destination RGBA values can be modified before being blended to give different effects.
The functions that modify the RGBA information are:
- GL_ONE: X = [R,G,B,A] * [1,1,1,1]
- GL_ZERO: X = [R,G,B,A] * [0,0,0,0]
- GL_SRC_COLOR: X = [R,G,B,A] * [R,G,B,A]
- GL_ONE_MINUS_SRC_COLOR: X = [R,G,B,A] * [1-R,1-G,1-B,1-A]
- GL_SRC_ALPHA: X = [R,G,B,A] * [A,A,A,A]
- GL_ONE_MINUS_SRC_ALPHA: X = [R,G,B,A] * [1-A,1-A,1-A,1-A]
All the Red, Green, Blue and Alpha values are converted to a scale of 1.0 – 0.0 for this math. Normally the color black is represented as the RGB value of [0,0,0] and white as [255,255,255]. You can convert this to a 1.0 – 0.0 scale by dividing each number by 255. Therefore [1,1,1] equals white and [0,0,0] equals black. The alpha channel is a grayscale black and white image. White is opaque and black is transparent in the alpha channel (usually). So the A in RGBA will be a number between 0.0 and 1.0 reflecting its degree of opacity. So, [1,1,1,1] is a completely white, opaque texture, while [0,0,0,0] is a completely black, transparent texture.
Lets do some math:
We'll take a blue texture with an all-white alpha channel (meaning it's opaque) with the RGBA value of [70,146,207,255]. We'll convert this to a 1.0 – 0.0 scale by dividing by 255. We get [.275, .573, .812, 1]. Here's what we get if we run this through a few of the functions:
- GL_ONE -> [.275, .573, .812, 1] * [1, 1, 1, 1] = [.275, .573, .812, 1] (the same color)
- GL_ZERO -> [.275, .573, .812, 1] * [0, 0, 0, 0] = [0, 0, 0, 0] (transparent black)
- GL_SRC_COLOR -> [.275, .573, .812, 1] * [.275, .573, .812, 1] = [.076, .328, .659, 1] (converts to [19, 84, 168, 255] which is a much darker shade of blue)
- GL_ONE_MINUS_SRC_COLOR -> [.275, .573, .812, 1] * [.725, .427, .188, 0] = [.199, .245, .153, 0] (converts to [51, 63, 39, 0] which is a transparent dark green)
- GL_SRC_ALPHA -> [.275, .573, .812, 1] * [1, 1, 1, 1] = [.275, .573, .812, 1] (the same color)
- GL_ONE_MINUS_SRC_ALPHA -> [.275, .573, .812, 1] * [0, 0, 0, 0] = [0, 0, 0, 0] (transparent black)
This was just to get an idea of what’s going on. Remember, however, that there is a source and a destination. Let’s look at some more complex (and useful) ones:
blendFunc GL_ONE GL_ONE.
Well, let's see. This would mean X = (src_RGBA * [1,1,1,1]) + (dest_RGBA * [1,1,1,1]).
Imagine that the source is a yellow color [.973, 1, .126, 1] (or 248, 255, 32, 255) and the destination is a blue color that was rendered previously [.126, .180, 1, 1] (or 32, 46, 255, 255). Let's do the math. X = (([.973, 1, .126, 1] * [1,1,1,1]) + ([.126, .180, 1, 1] * [1,1,1,1])) = [1, 1, 1, 1] (you can't go over 1). This means we will get a white texture. Try it if you'd like:
{
    map textures/mytextures/blue.tga
}
{
    map textures/mytextures/yellow.tga
    blendFunc GL_ONE GL_ONE
}
This is what’s called an additive blend, because, if you noticed, we added the two colors together. There is a shorthand for this. Use “blendFunc add”.
How about this one:
blendFunc GL_DST_COLOR GL_ZERO
This would mean: X = (src_RGBA * dest_RGBA) + (dest_RGBA * [0,0,0,0]).
Let’s do the math using the yellow and blue as before:
X = (([.973, 1, .126, 1] * [.126, .180, 1, 1]) + ([.126, .180, 1, 1] * [0,0,0,0])) = [.123, .180, .126, 1] (or 31, 46, 32, 255). This is a nice dark green color.
The yellow texture in this example is acting as a filter for the blue .tga texture. Like putting a colored lens on a camera. In fact there is a shorthand for this. Use “blendFunc filter”. blendfunc GL_ZERO GL_SRC_COLOR will do the same thing.
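As a stage sketch, the long form and the shorthand are equivalent (the texture path is a made-up placeholder):

```
{
    map textures/models/tint.tga
    blendFunc GL_DST_COLOR GL_ZERO    // same effect as: blendFunc filter
}
```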
Let’s look at one last example:
blendFunc GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA.
This would mean: X = (src_RGBA * [src_A, src_A, src_A, src_A]) + (dest_RGBA * [1-src_A, 1-src_A, 1-src_A, 1-src_A]).
First of all, for this to work properly you should put the texture with the transparency as the source (the second rendering stage), not the destination. Let's look. Notice that if the source's alpha channel is white (i.e. opaque, or 1), then the source is displayed but the destination is not (because the destination is being multiplied by zero). This means the source overlays the destination colors.

But what happens if the source is transparent (i.e. the alpha channel is 0)? Now the source is hidden and the destination is displayed. Ah HA! I hear you say. This is the blend for a transparency. There is a shorthand for this blend. Use "blendFunc blend".
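A transparency stage sketched out (again, the path is a made-up placeholder):

```
{
    map textures/models/glass.tga    // 32-bit targa with an alpha channel
    blendFunc GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA    // same as: blendFunc blend
}
```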
Now what about alphaFunc? This is also known as alpha testing. That's because this function tests the values in the alpha channel and discards any pixel information that doesn't fit the criteria. There are 3 parameters for alphaFunc.
- GT0 – greater than zero
- LT128 – less than 128
- GE128 – greater than or equal to 128
Previously we had been looking at the alpha channel as a scaled number from 1.0 – 0.0. Here we’re using 0-255. 0 being black and 255 being white.
Let’s say that you have a 32-bit targa file with an alpha channel. In the alpha channel you make everything that you want to be invisible, black. Then you would say “alphaFunc GT0”. The engine looks at the alpha channel and discards any RGB information that is not associated with an alpha value greater than zero in the alpha channel. (i.e. everything that’s black).
You should use alphaFunc with depthWrite to get this to work.
Now we'll turn to alphaGen and rgbGen. These take the same parameters. The only difference is that alphaGen works on the alpha channel value and rgbGen works on the RGB values. This can produce interesting results.
The parameters for these are the following:
- identity – this means “leave me alone”. This is the default if nothing is specified. It multiplies the RGB or A values by 1.0.
- identityLighting – similar to identity (not sure of the difference)
- entity – Supposedly multiplies the RGB or A value by the lighting on the entity. This includes both dynamic lighting and lightmap lighting. I haven't yet received good results with this.
- oneMinusEntity – gives you the inverse of entity.
- vertex – This is the appropriate lighting for misc_models. This will map shadows onto the model. It multiplies the RGB or A value by a static lighting value (vertex lighting). Remember, the key word is 'multiplies'. When you multiply by anything less than 1 (white light), you're getting a darker RGB value. The A value drops if you're using alphaGen. This means transparencies will become more transparent in shadows if they're on a gradient. This is not appropriate for misc_gamemodels or script_model_meds, because they're not vertex lit (I think).
- oneMinusVertex – gives you the inverse of vertex.
- lightingDiffuse – lighting for misc_gamemodels and script_model_meds. Uses the vertex normals (a normalized vector) to light the model. Not for use with misc_models.
- wave – pulsates the darkness/lightness of the texture.
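Assuming the entity behavior described above, a model shader typically picks one of these per stage; both paths here are made-up placeholders:

```
// For a misc_model (vertex lit):
models/mapobjects/crate/crate
{
    {
        map models/mapobjects/crate/crate.tga
        rgbGen vertex
    }
}

// For a misc_gamemodel or script_model_med:
models/mapobjects/crate/crate_dynamic
{
    {
        map models/mapobjects/crate/crate.tga
        rgbGen lightingDiffuse
    }
}
```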
DepthFunc and depthWrite have to do with something called a Z-buffer.
Alright, I'm not a 3D graphics designer, but here's my explanation of a Z-buffer (from my small understanding). Graphics cards hold 'depth' information about each pixel on a texture. The object that is closer to you gets displayed. A good example is when your mother stood in front of the TV when you were younger. Your mother had a higher Z-buffer priority and so got displayed. Typically, when rendering a shader, all the passes are made to produce the final drawing of the texture, and then this is all written to the mysterious Z-buffer.

We can suppress this final-rendering Z-buffer behavior with the command depthWrite. It forces the current rendering stage to be written to the Z-buffer immediately. Usually this command follows the command alphaFunc.
Remember that alphaFunc discards color info from the texture, effectively creating 'holes', or transparencies. When there are no more renderings of the texture, you're good to go. Just slap that baby on the surface. However, if you need to add another rendering, such as blending a lightmap, you'll need to tell the computer to leave those holes alone. You do this by telling the engine to write the texture with holes to the Z-buffer before finishing the rendering, using the command depthWrite. Then, in the next rendering pass, when you blend the next texture, you add the command "depthFunc equal". This tells the engine to leave the holes alone. I think this works because alphaFunc sends those discarded pixels into the yonder game world, Z-bufferly speaking, so when blending occurs it occurs only on those pixels with an equivalent Z-buffer value currently mapped to the surface. This is conjecture on my part, however.
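Putting those last few keywords together, here is the classic two-stage 'texture with holes' shader sketched from the description above (the path is a made-up placeholder):

```
textures/models/grate
{
    cull none
    {
        map textures/models/grate.tga
        alphaFunc GE128      // discard pixels whose alpha is below 128
        depthWrite           // write the holes to the Z-buffer now
        rgbGen identity
    }
    {
        map $lightmap
        blendFunc filter
        depthFunc equal      // blend only where the first stage drew
        rgbGen identity
    }
}
```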
The last rendering thing to talk about is tcGen and tcMod.
These are TextureCoordinate modifications. TcGen has three parameters which are:
- environment – this is for environmental mapping (very cool stuff).
- lightmap – gets texture coordinates from the lightmap? (Looks a little like entity lighting to me; I'm not sure.) I think the new q3map will support this feature better, because right now it looks like crap.
- base – gets texture coordinates from the UV map. I would guess this is a default.
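A minimal environment-mapped ('shiny') stage might look like this, assuming a generic environment texture path (a placeholder, not from the original):

```
{
    map textures/effects/envmap.tga
    tcGen environment          // texture coordinates come from the view angle
    blendFunc GL_ONE GL_ONE    // add the shine on top of the base texture
}
```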
TcMod allows you to move the texture around on the surface of the model. Your options are:
- rotate – this, of course, makes the texture spin around. (getting dizzy……..)
- scale – this scales the texture to a certain size.
- scroll – this scrolls the texture along the model
- stretch – sorta pulsates the texture in a waveform.
- transform – your guess is as good as mine.
- turb – adds washing machine turbulence
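A stage can chain several tcMod keywords; here's a sketch with made-up values and path:

```
{
    map textures/models/mytexture.tga
    tcMod scale 2 2        // tile the texture twice in each direction
    tcMod scroll 0.5 0     // slide it along the U axis
    tcMod rotate 20        // spin it at 20 degrees per second
}
```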
Well that’s about it folks. Class dismissed.