What is the difference between lighting and shading?

A number of different types of light sources exist to provide customization for the shading of objects; shading also depends on lighting. An ambient light source represents a fixed-intensity, fixed-color light source that affects all objects in the scene equally. Upon rendering, all objects in the scene are brightened with the specified intensity and color.

This type of light source is mainly used to provide the scene with a basic view of the different objects in it. A directional light source illuminates all objects equally from a given direction, like an area light of infinite size at an infinite distance from the scene; there is shading, but there cannot be any distance falloff. An area light originates from a single plane and illuminates all objects in a given direction beginning from that plane. A volume light is an enclosed space that lights the objects within that space. Shading is interpolated based on the angle at which these light sources reach the objects within a scene.
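The light types described above can be sketched as plain data records. This is a hypothetical minimal representation in Python; the class and field names are illustrative, not taken from any particular renderer:

```python
from dataclasses import dataclass

# Hypothetical minimal records for the light types described above;
# field names are illustrative, not from any particular API.

@dataclass
class AmbientLight:
    color: tuple        # fixed color applied to every object equally
    intensity: float    # fixed intensity; no direction, no falloff

@dataclass
class DirectionalLight:
    direction: tuple    # all objects lit from this one direction
    color: tuple        # shading occurs, but no distance falloff

@dataclass
class AreaLight:
    plane_point: tuple  # light originates from a single plane...
    normal: tuple       # ...and shines in this direction from it
    color: tuple

@dataclass
class VolumeLight:
    bounds: tuple       # an enclosed space lighting objects inside it
    color: tuple
```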

Of course, these light sources can be, and often are, combined in a scene. The renderer then interpolates how these lights must be combined and produces a 2D image to be displayed on the screen accordingly. Theoretically, two parallel surfaces are illuminated the same amount by a distant light source, such as the sun.

Even though one surface is further away, your eye sees more of it in the same space, so the illumination appears the same. Notice in the first image that the color on the front faces of the two boxes is exactly the same.

It appears that there is a slight difference where the two faces meet, but this is an optical illusion caused by the vertical edge below where the two faces meet. Notice in the second image that the surfaces on the boxes are bright on the front box and darker on the back box. Also the floor goes from light to dark as it gets farther away. This distance falloff effect produces images which appear more realistic without having to add additional lights to achieve the same effect.

Flat shading is a lighting technique used in 3D computer graphics to shade each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors, and the intensity of the light source.

It is usually used for high-speed rendering where more advanced shading techniques are too computationally expensive. As a result of flat shading, all of the polygon's vertices are colored with one color, allowing differentiation between adjacent polygons. Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face.
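The idea of one lighting evaluation per polygon can be sketched in Python; the helper names here are my own, not from any graphics API:

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def flat_shade(face_normal, to_light, light_rgb, material_rgb):
    # Flat shading: evaluate the lighting model ONCE per polygon,
    # using the face normal; every pixel of the face gets this color.
    lambert = max(0.0, dot(normalize(face_normal), normalize(to_light)))
    return tuple(l * m * lambert for l, m in zip(light_rgb, material_rgb))
```

A face lit head-on by white light keeps its full material color, and a face turned away from the light goes black, since the clamped cosine term is zero.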

Consequently, the specular reflection component is usually not included in flat shading computation. Smooth shading of a polygon displays the points in a polygon with smoothly-changing colors across the surface of the polygon. This requires you to define a separate color for each vertex of your polygon, because the smooth color change is computed by interpolating the vertex colors across the interior of the triangle with the standard kind of interpolation we saw in the graphics pipeline discussion.

Computing the color for each vertex is done with the usual computation of a standard lighting model, but in order to compute the color for each vertex separately you must define a separate normal vector for each vertex of the polygon. This allows the color of the vertex to be determined by the lighting model that includes this unique normal.

Phong shading is similar to Gouraud shading, except that the normals are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model.

Rendered image of a box. Gouraud shading was invented as an improvement to allow for smoother transitions of the color on round objects.

The main idea is that there is a different normal per vertex, and the color is calculated in the vertex shader. That color is then interpolated over the polygon. Because there are fewer vertices than there are fragments, calculating the color per vertex and interpolating it is more efficient than calculating it per fragment.
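The interpolation step can be sketched as a toy version of what the rasterizer does, assuming barycentric coordinates that sum to 1:

```python
def interpolate_vertex_colors(c0, c1, c2, bary):
    # Gouraud shading: the lighting model already ran once per vertex,
    # producing colors c0, c1, c2; each fragment merely blends them
    # using its barycentric coordinates inside the triangle.
    b0, b1, b2 = bary
    return tuple(b0 * x + b1 * y + b2 * z for x, y, z in zip(c0, c1, c2))
```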

This approach handles materials with a specular reflection badly. The highlight might occur inside the polygon, but not on any of the vertices; if it does not happen to fall on a vertex, this shading will not show it. Phong shading was another improvement, made in order to account for the specular reflection. The main idea is that the normals from the vertices are interpolated, and the color is calculated per fragment, taking into account the interpolated normal.
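The normal-interpolation step can be sketched like this (again with barycentric coordinates that sum to 1; the function name is my own):

```python
import math

def interpolated_normal(n0, n1, n2, bary):
    # Phong shading: interpolate the vertex *normals* across the
    # triangle and renormalize; the lighting model then runs per
    # fragment, so a highlight inside the polygon is not lost.
    b0, b1, b2 = bary
    n = tuple(b0 * x + b1 * y + b2 * z for x, y, z in zip(n0, n1, n2))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

Renormalizing matters: the straight-line blend of two unit normals is shorter than unit length, and an unnormalized normal would darken the lighting result.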

For an approximation of a sphere, this is quite ideal, because the interpolated normals would be exactly those that a perfect sphere would have. Because the color is calculated based on the normals, it will be calculated as if it were a perfect sphere.

On the right there is an example with all three shadings. Those shapes have only a diffuse reflection, no specular. As you can see, there are still quite noticeable differences with the sphere. For the cube, there are actually 24 vertices: each face has its own 4, with normals pointing in the same direction as the surface normal.

In that case there is no difference, as far as the result is concerned, in which shading is used. Next we will look at how the actual color is determined. It does not matter which shading is used, as long as we have a surface normal, material properties, light source properties, and the light direction. Before we look at the actual reflection of light from a surface, let us define a type of light source.

A directional light source defines light coming from a single direction. This is an approximation of reality, because it would assume the existence of an infinitely wide and tall object that emits light. Still, we are quite used to thinking of sunlight as directional light: the Sun is so big compared to the Earth that within a visible area the angles of its rays do not differ by a noticeable amount. Now, we are given a direction the light is coming from and some sort of a surface.

Diffuse surfaces have the property of scattering reflected light around. Light will enter the surface, bounce around there, and then exit in a random direction. Alternatively, you can think of an atom absorbing a photon and, after some time, emitting another photon in a different direction. Because of this diffusion of light, it does not matter at which angle we look at the material.

All that matters is the angle at which the light actually reaches the material. This is because the amount of light reaching one surface unit will be higher if the light is coming from a more perpendicular angle. If the light is coming from a grazing angle, the same amount of light will cover a much larger area, and thus a surface unit will receive less light. As you can see, the amount of light received, and thus reflected, is directly related to the angle between the surface and the light direction.

This is where we will need a surface normal. The cosine of the angle between the surface normal and the direction towards the light directly gives us the illumination percentage. Now, does this mean that we would have to calculate the cosine in the fragment shader? Would that not be quite slow?

Luckily we can calculate the cosine another way. As you remember, the dot product was geometrically defined as a · b = |a| |b| cos θ. If we are dealing with normalized vectors, then we can directly find the cosine between them by just taking the algebraic dot product. So we have to make sure our surface normal and the light source direction are normalized. Another thing to notice is that because the cosine is negative for larger angles, we want to use only the positive values of the cosine; otherwise we would subtract light when the light source is on the other side of the material.
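Both points, normalizing first and clamping the negative cosine to zero, can be sketched in a few lines of Python (helper names are my own):

```python
import math

def normalize(v):
    # Scale a vector to unit length so the dot product is a pure cosine.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_factor(normal, to_light):
    # With both vectors normalized, the dot product IS the cosine of
    # the angle between them. Clamping at zero stops a light behind
    # the surface from subtracting light.
    return max(0.0, dot(normalize(normal), normalize(to_light)))
```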

This becomes even more important when we add other terms to the equation. This is the main idea behind the Lambert lighting model, which models the diffuse reflection of light. Now, in order to determine the actual color, we need to know two things: the color of the light source and the color of the surface material. In computer graphics we usually define our colors by three channels: red, green, and blue.

So we will have three terms for each of those channels, for both the light source and the surface material. The final color computed in the fragment shader would then be, per channel, lightColor · materialColor · max(0, n · l), where n is the surface normal and l the direction towards the light. In reality there are very few surfaces that are almost completely diffusely reflective. Examples would include the surface of the Moon, chalk, and matte paper. The cubes and spheres shown before kind of appear to be in space.

Their non-illuminated sides are totally in darkness. In reality this is not the case; there is almost always some light coming from every direction. That is because light reflects from nearby surfaces, bounces around, and reaches the areas not directly illuminated. This is called indirect illumination and is one of the things that global illumination techniques try to capture accurately, which may be computationally quite expensive.

In our simple model we can make a very rough approximation and just add an ambient term to the reflected intensity. Directional lights are faster than point lights because the light direction L' does not need to be recomputed for each polygon. It is rare that we have an object in the real world illuminated by only a single light.

Even on a dark night there is some ambient light. To make sure all sides of an object get at least a little light, we add some ambient light to the point or directional light. Currently no distinction is made between an object close to a point light and an object far away from that light; only the angle has been used so far. It helps to introduce a term based on distance from the light.
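Adding the ambient term is a one-line change to the per-channel computation; a minimal sketch, assuming RGB tuples in the 0..1 range and a precomputed Lambert factor:

```python
def shade(ambient_rgb, light_rgb, material_rgb, lambert):
    # The constant ambient term keeps the unlit side of an object
    # dimly visible instead of pitch black; it is a crude stand-in
    # for indirect illumination. Per channel, clamped to 1.
    return tuple(min(1.0, (a + l * lambert) * m)
                 for a, l, m in zip(ambient_rgb, light_rgb, material_rgb))
```

With the light behind the surface (lambert = 0) a red material still shows a dim red rather than black.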

So we add a light source attenuation factor, F_att. It can take a fair amount of time to balance all the various types of lights in a scene to give the desired effect, just as it takes a fair amount of time in real life to set up proper lighting. A material's properties describe how light is reflected off the surface of the polygon.
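One common form of F_att, used for instance in classic fixed-function lighting, divides by a quadratic polynomial in the distance; the constants below are illustrative defaults, not values from the text:

```python
def attenuation(distance, k_c=1.0, k_l=0.1, k_q=0.01):
    # F_att = min(1, 1 / (k_c + k_l*d + k_q*d^2)); the constants are
    # illustrative. Clamping at 1 keeps very close lights from
    # over-brightening the surface.
    return min(1.0, 1.0 / (k_c + k_l * distance + k_q * distance ** 2))
```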

If a red polygon is hit with a white light, it will appear red. If it is hit with a blue light, a green light, or an aqua light, it will appear black, as those lights have no red component.

If it is hit with a yellow light or a purple light, it will appear red, as the polygon will reflect the red component of the light. One important thing to note about all of the above equations is that each object is dealt with separately; that is, one object does not block light from reaching another object. The creation of realistic shadows is quite expensive if done right, and is currently an active area of research in computer graphics.
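The red-polygon behaviour above is just per-channel multiplication; sketched in Python:

```python
def reflected(light_rgb, material_rgb):
    # A surface can only reflect the channels present in BOTH the
    # light and the material: a red material under blue light looks
    # black, but under yellow or purple light its red survives.
    return tuple(l * m for l, m in zip(light_rgb, material_rgb))

red = (1.0, 0.0, 0.0)
white, blue, yellow = (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)
```

Here `reflected(white, red)` and `reflected(yellow, red)` both come out red, while `reflected(blue, red)` is all zeros, i.e. black.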

Consider, for example, a plant with many leaves, each of which could cast shadows on other leaves or on other nearby objects; then further consider the leaves fluttering in the breeze, lit by diffuse or unusual light sources. We often use polygons to simulate curved surfaces.

In these cases we want the colours of the polygons to flow smoothly into each other. Given a single normal to the plane, the lighting equations and the material properties are used to generate a single colour; the polygon is filled with that colour.
