
Alright, last time there was a lot of dusty theory of light that we had to cover. This time we will get down to the more practical side of modelling the effects of light and actually look at the visual results. For these examples I will use a model from one of our projects at TheGameAssembly.

I have to mention now, in case any artists are watching, that most of the maps here have not been properly tweaked by artists; in fact I am working with non-final content. Besides this, I have tampered a bit with the maps to make the effects I am trying to show more obvious.

First I will talk a bit about how most games resolve their material functions, and why, even though it is highly incorrect, it actually works out fine. What most games do is that at the base of the model they have a colormap, which tells us what colour should be displayed in that space. This neatly represents the material function for most things, as what the colormap tells us is which colours are reflected by the surface at that point. We simply multiply the RGB values of the total incoming light with the RGB values of the colormap to get the colour of the light that is reflected. This actually works out all right despite being physically unsound: what you should really do is convert the light colour to wavelengths, do any combinations there, and then convert back to RGB space. Skipping that does create some artefacts, but for most cases and users it will work out just fine, so from here on we will assume that this is a working approximation.
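As a minimal sketch of that multiply (the Color3f struct here is a stand-in of my own, mirroring the class used in the code further down):

```cpp
struct Color3f
{
    float r, g, b;
};

// The material function most games use: reflect the incoming light by
// multiplying it component-wise with the surface's colormap value.
Color3f MaterialFunc(const Color3f& incomingLight, const Color3f& colorMap)
{
    return { incomingLight.r * colorMap.r,
             incomingLight.g * colorMap.g,
             incomingLight.b * colorMap.b };
}
```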

So let’s look at our model with only its colormap visible. I have selected three camera angles that we will use consistently throughout these examples, because they clearly show the differences between the lighting models at each step.

So let’s start with the ambient lighting. Ambient light hits the object from all directions, so it hits all surfaces identically. However, because different areas of the object have different material functions, we get different visual looks at different parts of the model. In code terms it should look something like this:

```cpp
// Color3f is simply a class containing the R, G and B colour values
Color3f reflectedLight(0, 0, 0);
Color3f incomingLight(0, 0, 0);

incomingLight += Ambient.LightIntensity;
reflectedLight = MaterialFunc(incomingLight);

// ...which, since our material function is the colormap multiply, is just:
reflectedLight = Ambient.LightIntensity * colorMap;
```

We will return to ambient in a while to look at it more in depth, but for the moment let's see what our test model looks like with ambient light only.

You will notice it looks exactly like the colour map, only darker. This is because the ambient light's strength is the same from all directions, so it has only one light intensity.

Ambient wasn't that exciting, but it is necessary; otherwise the side of your model facing away from the light becomes pitch black. Before we move on to diffuse lighting, I will talk a bit about per-pixel vs. per-vertex lighting. It is just a question of granularity: in one case you calculate the light at the vertices and interpolate inside the polygon; with per-pixel lighting you have an extra map that gives you surface data per pixel, which means you can achieve much greater visual fidelity without having to use too many polygons. For these lectures we will start with per-vertex lighting, because the effects are clearer that way, and then change to per-pixel during the later part of the lectures.

Let’s first go back to our basic formulas from the last lecture:

```cpp
float irradiance = 0;
float exitance = 0;
float radiance = 0;

// Sum the energy of every photon that hits the surface
for (int i = 0; i < photonCount; i++)
{
    irradiance += photon[i].Energy;
}

exitance = MaterialAbsorbationFunction(irradiance);
radiance = MaterialReflectionFunction(eyePosition, surfacePosition, exitance);
```

Since we aren't actually tracing photons, we can simplify this quite a bit. We can also, for the moment, assume that we will take care of the surface reflection function at another time, so that the exitance is simply the light that reaches us.

```cpp
float irradiance = 0;
float exitance = 0;

// Sum the intensity of every light that reaches the surface
for (int i = 0; i < lightCount; i++)
{
    irradiance += Light[i].lightIntensity;
}

exitance = MaterialAbsorbationFunction(irradiance);
```

lightIntensity is the value of the light that actually reaches the surface, so we have a simple formula for it:

```cpp
LightIntensity = FalloffFunction(lightStrength, DistanceToLight);
```

Light loses its strength while travelling through the air, partly because the photons lose their energy, partly because they collide with different particles in the air that stop them, but also because, since the photons are travelling in different directions, they get further and further away from each other. I will not go into detail here; it suffices to say that the approximation most used for physical light is the following:

```cpp
LightIntensity = lightStrength / (DistanceToLight * DistanceToLight);
```

For games, however, this is sometimes impractical because it has no clear distance at which it ends, so for both games and rendered films we often use our own falloff functions, with the only real criterion being that it looks good.
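As an illustration, here is the physical falloff next to a made-up game-style falloff with a hard end distance. The linear ramp and the maxRange parameter are my own example, not a formula from any particular engine:

```cpp
// Physically based falloff: intensity drops with the square of the distance.
// Note it never quite reaches zero, which is what makes it awkward in games.
float PhysicalFalloff(float lightStrength, float distanceToLight)
{
    return lightStrength / (distanceToLight * distanceToLight);
}

// A typical "looks good" game falloff: a linear ramp that reaches exactly
// zero at maxRange, so the light has a clear end distance.
float GameFalloff(float lightStrength, float distanceToLight, float maxRange)
{
    if (distanceToLight >= maxRange)
        return 0.0f;
    return lightStrength * (1.0f - distanceToLight / maxRange);
}
```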

But since we aren't working with a physics simulation, we would like to start looking at this as calculating pixel colours instead of radiance and so on, because we are going to be working in pixels in the end. The code will look extremely similar anyway.

```cpp
Color3f pixelColor(0, 0, 0);

for (int i = 0; i < lightCount; i++)
{
    pixelColor += LightReflectionFunction(
        MaterialAbsorbationFunction(Light[i].lightIntensity));
}
```

That’s it. We already know the MaterialAbsorbation function: it's just multiplying with the colormap. But what does the LightReflection function do? Well, it tries to calculate how much of the light reflected from this area actually hits your eyes. And that is what we are going to work on now as we move on to diffuse lighting.

In physics you might have learned that when a ray hits a surface, it is reflected around the surface normal so that the exit vector has the same angle to it as the incoming vector (I have mentioned this quickly before). Looked at like this, it should only be a matter of calculating from what direction incoming light needs to hit the surface for it to reach your eye. The problem is that the surface isn't perfectly smooth (well, a mirror is pretty close, but not perfect), which means that your rays will hit the surface with different normals and be reflected at different angles. Obviously more parts of the surface will be facing in the surface's general direction, so it will be lighter there. But even on a part of the surface that starts turning away from you, there will be rays that bounce towards your eyes.

You could say that diffuse lighting models the light beams that do not reflect perfectly straight into your eyes, but are instead diffused by hitting a surface and being spread in different directions, or diffused by the small particles in the air. Modelling all of this is again pretty much impossible, so what we do is let the distance function take care of the part where the photons are travelling through the air.

We also assume that all materials have roughly the same molecular structure, so that we can use the same code to decide how light bounces. The basic idea is that the less the surface faces towards you, the fewer rays bounce off it into your eyes, and therefore it gets darker. If it faces more than 90 degrees away from you, you won't see anything, because the light will be hitting the other side of the surface. It is generally assumed that the amount of light that hits your eye is proportional to the cosine of the angle between the surface normal and the reflected light vector: if that angle is 0, the light hits the surface straight on and reflects straight back; the larger the angle, the more obliquely the light strikes the surface.

If we try to make this into code, it would look like this:

```cpp
PixelIntensity = Lightsource.lightIntensity * cos(AngleToReflectionVector);
```

This could be written as

```cpp
PixelIntensity = Lightsource.lightIntensity * cos(acos(dot(surfacenormal, lightDirection)));
```

A quick mathematical proof of the exchange

```cpp
cos(a) = dot(v1, v2)
```

This should be familiar; if it isn't: the dot product between two normalized vectors produces the same value that cosine returns when you input the angle between them. So the following steps go smoothly:

```cpp
cos(a) = dot(v1, v2)
acos(cos(a)) = acos(dot(v1, v2))
a = acos(dot(v1, v2))
```

Which is just what we did: we replaced the angle with the acos of the dot product between the two vectors that produce that angle. And since cos(acos(x)) is simply x again, let's clean that formula up a bit.

```cpp
PixelIntensity = Lightsource.lightIntensity * dot(surfacenormal, lightDirection);
```
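As a quick numeric sanity check of that identity (a sketch of my own; it assumes both vectors are normalized):

```cpp
#include <cmath>

// Dot product of two normalized 3D vectors: returns the cosine of the
// angle between them.
float Dot(const float v1[3], const float v2[3])
{
    return v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2];
}

// Taking acos of the dot product and then cos again just gives the dot
// product back, which is why the cleaned-up formula drops both calls.
float CosOfAngleBetween(const float v1[3], const float v2[3])
{
    return std::cos(std::acos(Dot(v1, v2)));
}
```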

This gives us the final formula

```cpp
pixelColor = MaterialFunction(Lightsource.lightIntensity * dot(surfacenormal, lightDirection));
```

Remember that lightIntensity also needs to be calculated; we just leave that out of the formulas to keep them clean. So there we have the contribution of the diffuse part.
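The diffuse step can be sketched as a small function. The Vec3 type and the clamp to zero (which implements the "more than 90 degrees away" rule from above) are my own stand-ins, not code from the lecture:

```cpp
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Diffuse contribution of one light: the intensity scaled by the cosine of
// the angle between the surface normal and the direction towards the light.
// Surfaces facing more than 90 degrees away get zero light, not negative.
float DiffuseIntensity(float lightIntensity,
                       const Vec3& surfaceNormal, const Vec3& lightDirection)
{
    float cosAngle = Dot(surfaceNormal, lightDirection);
    if (cosAngle < 0.0f)
        cosAngle = 0.0f; // light is hitting the back of the surface
    return lightIntensity * cosAngle;
}
```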

For the first time we are seeing some definition of form on the object. If you feel that this doesn't look that good, it's partly because we are still doing all our work per vertex, and partly because we have just gotten started on what we want to include.

Specular is the third term of our lighting system. To simplify things, I said earlier that diffuse light represents the light that is diffused at the surface before hitting your eye, so that no straight reflected ray reaches you. This is all a matter of definition: the rays consist of photons, and it is the number of photons hitting your eye that determines how bright something looks, so there is no diffuse or specular light, just different amounts of photons. But since we are working with approximations, for now we consider specular light to be the rays that reflect perfectly off the surface and hit your eyes directly; this means the specular doesn't need to be in the area that has the brightest diffuse light. So let's consider them undiffused rays of light. As they hit the eye directly, they depend not only on the direction of the light and the normal of the surface, but also on the direction between the viewer and the surface. If the angle between the viewer-to-surface vector and the surface normal is the same as the angle between the light direction and the surface normal, we have a perfect reflection.

Now, how much of this you see depends on the material you are looking at. Some materials, like steel and plastic, have a very clear specular, while paper normally doesn't; but for the moment we will assume that all materials have a full specular and later look at how to compensate for the differences between materials.

Depending on the surface, the specular might be large or small. As a rule, materials with small unevennesses will have rays over a bigger area that directly hit the eye, but they will also generally have a weaker specular component, because, being more uneven, they also reflect more light away from the viewer's eyes.

So let’s look at how to calculate all of this. Because performing two dot products for each angle and then comparing the results is quite expensive, most of the time we look for an easier solution. The idea is as follows: if the sum of the two vectors is calculated and then normalized, you get the normal of a surface that would produce a perfect reflection, and you can then consider how much this vector differs from the surface's actual normal to see how close we are to a perfect reflection. This vector is normally called the half vector, because it's halfway between the two original vectors. So the half vector represents the vector that the light should have been reflected about.

```cpp
halfVector = normalize(lightvector + eyeToSurfaceVector);
reflectionDifference = dot(surfaceNormal, halfVector);
```

So now we have the cosine of the angle between the surface normal and the perfect-reflection vector; we just need a factor to control how far the highlight spreads, so that it doesn't occupy the entire material.

```cpp
specularStrength = pow(reflectionDifference, SurfaceConstant);
pixelColor += lightSource.LightIntensity * specularStrength;
```

Observe that we just added the specular component straight in, without the material function. I am a bit shaky on the physics, but I believe that because it is a perfect reflection, its intensity is so strong that it isn't coloured by the material it hits. However, notice the *SurfaceConstant* part of the code: that's the factor that determines the material's specular reflective properties. Values somewhere between 8 and 32 are normally sane, but this is all for the artist to play around with to simulate different materials.
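The whole specular step can be gathered into one function. This is a sketch of the half-vector approach above, under the assumption that both input vectors are normalized and point away from the surface, towards the light and the eye respectively; the Vec3 helpers are my own stand-ins:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Half-vector specular: the closer the surface normal is to the vector
// halfway between the light and eye directions, the stronger the highlight.
// surfaceConstant (typically 8-32) controls how tight the highlight is.
float SpecularStrength(const Vec3& surfaceNormal, const Vec3& lightVector,
                       const Vec3& eyeVector, float surfaceConstant)
{
    Vec3 halfVector = Normalize({ lightVector.x + eyeVector.x,
                                  lightVector.y + eyeVector.y,
                                  lightVector.z + eyeVector.z });
    float reflectionDifference = Dot(surfaceNormal, halfVector);
    if (reflectionDifference < 0.0f)
        reflectionDifference = 0.0f; // the highlight can't go negative
    return std::pow(reflectionDifference, surfaceConstant);
}
```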

Since we can't handle models with too high vertex counts in real time on current hardware, we have to cheat somehow to get in extra detail. What is used for this is a bump map. A bump map is a map that tells the rendering code how the surface changes within a polygon; it's just like the colormap, but it contains information about surface deformations instead. The most commonly used type of bump map today is called a normal map: it contains the actual normals of the surface at each point (or, in tangent-space bump mapping, a modification to the normal at that point). This allows us to simulate a lot more detail, as long as we accept that the bumps are always flat against the polygonal surface; they can't add details to the outline, only inside the surfaces.

The most common way to generate a normal map is to create one highly detailed version of the object and one low-detail version, and project the details from the high-detail version into the normal map for the low-detail version. This way you can create an object that looks like it has millions of polygons without paying the cost for them. So as a final step, let's look at our model using per-pixel lighting that takes its normals from the normal map instead of from the vertices.
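As an illustration of how the per-pixel normals are read back, here is a sketch of decoding one normal-map texel, assuming the common convention that each normal component is remapped from [-1, 1] into the 0-255 range of a colour channel (the function name is my own):

```cpp
struct Vec3 { float x, y, z; };

// Decode one normal-map texel: each 0-255 colour channel is remapped
// back into the [-1, 1] range of a normal component.
Vec3 DecodeNormal(unsigned char r, unsigned char g, unsigned char b)
{
    return { r / 255.0f * 2.0f - 1.0f,
             g / 255.0f * 2.0f - 1.0f,
             b / 255.0f * 2.0f - 1.0f };
}
```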

Quite a difference, don't you agree? The specular really sticks out now and makes it look plastic, but in the next instalment we will look at how to fix that, and also how to take it all to the next level visually.
