Here I will explain some basic OpenGL terms that you will encounter in almost any OpenGL blog or tutorial.
The points themselves are called vertices (or vertex in the singular), and they are moved around in space with a convenient mathematical construct called a transformation matrix. Another matrix, a projection matrix, takes care of the mathematics necessary to turn our 3D coordinates into two-dimensional screen coordinates, where the final line drawing actually takes place.
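To make the matrix idea concrete, here is a minimal sketch in C of a 4x4 translation matrix applied to a vertex by hand. The names (mat4, translation, transform_point) are our own illustration, not part of the OpenGL API; the sketch follows OpenGL's column-major storage convention.

```c
#include <assert.h>

/* A 4x4 transformation matrix in column-major order, as OpenGL stores
   them. This is an illustrative sketch, not an OpenGL type. */
typedef struct { float m[16]; } mat4;

/* Build a matrix that moves points by (tx, ty, tz). */
static mat4 translation(float tx, float ty, float tz) {
    mat4 t = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  tx,ty,tz,1}};
    return t;
}

/* Multiply the matrix by the point (x, y, z, 1). */
static void transform_point(const mat4 *t, const float in[3], float out[3]) {
    for (int row = 0; row < 3; ++row)
        out[row] = t->m[row + 0]  * in[0]
                 + t->m[row + 4]  * in[1]
                 + t->m[row + 8]  * in[2]
                 + t->m[row + 12];   /* w = 1 picks up the translation */
}
```

Transforming the vertex (1, 2, 3) by a translation of (10, 0, 0) yields (11, 2, 3); a real pipeline chains several such matrices before projecting.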
The actual drawing, or filling in of the pixels between each vertex to make the lines, is called rasterization. We can further clarify our 3D intent with transformed and rasterized lines by employing hidden surface removal.
By varying the color values across the surface (between vertices), we can easily create the effect of a light shining on a red cube. Shading the surface creates the illusion of light. Lighting and shading are very large areas of specialty in the field of 3D graphics, and there are entire books written on this subject alone! Shaders on the other hand are individual programs that execute on the graphics hardware to process vertices and perform rasterization tasks.
The next hardware advance was texture mapping. A texture is simply a picture that we map to the surface of a triangle or polygon. Textures are fast and efficient on modern hardware, and a single texture can reproduce a surface that might take thousands or even millions of triangles to represent otherwise.
Blending allows us to mix different colors together. For example, a reflection effect can be done simply by drawing the cube upside down first. Then we draw the floor blended over the top of it, followed by the right-side-up cube. You really are seeing “through” the floor to the inverted cube below. Your brain just says, “Oh… a reflection.” Blending is also how we make things look transparent.
You are mostly concerned with two main types of projections in OpenGL. The first is called an orthographic, or parallel, projection. You use this projection by specifying a square or rectangular viewing volume. Anything outside this volume is not drawn. Furthermore, all objects that have the same dimensions appear the same size, regardless of whether they are far away or nearby. This type of projection is most often used in architectural design, computer-aided design (CAD), or 2D graphs. Frequently, you also use an orthographic projection to add text or 2D overlays on top of your 3D graphic scenes. You specify the viewing volume in an orthographic projection by specifying the far, near, left, right, top, and bottom clipping planes. Objects and figures that you place within this viewing volume are then projected (taking into account their orientation) to a 2D image that appears on your screen.
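As a sketch of what an orthographic projection does, the helper below (our own illustration, not an OpenGL call) shows the per-axis mapping it sets up: each coordinate in the range [lo, hi] of the viewing volume maps linearly to the normalized range [-1, +1], with no dependence on depth. That independence from depth is exactly why same-sized objects appear the same size whether near or far.

```c
#include <assert.h>

/* Map one coordinate from the orthographic viewing volume [lo, hi]
   to normalized device coordinates [-1, +1]. Illustrative sketch of
   the per-axis mapping an orthographic projection performs. */
static float ortho_map(float v, float lo, float hi) {
    return (2.0f * v - (hi + lo)) / (hi - lo);
}
```

With left = -10 and right = 10, an x of 5 maps to 0.5 regardless of how deep into the volume the point sits.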
The second and more common projection is the perspective projection. This projection adds the effect that distant objects appear smaller than nearby objects. The viewing volume is something like a pyramid with the top shaved off. The remaining shape is called the frustum. Objects nearer to the front of the viewing volume appear close to their original size, but objects near the back of the volume shrink as they are projected to the front of the volume. This type of projection gives the most realism for simulation and 3D animation.
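The shrink-with-distance effect comes from the perspective divide: after the frustum matrix is applied, each vertex's x and y end up divided by its distance from the viewer. The toy helper below (our own illustration, not a GL call) captures just that step for one coordinate.

```c
#include <assert.h>

/* Project an x coordinate onto the near plane of a frustum by dividing
   by the point's distance from the viewer. Illustrative sketch of the
   perspective divide. */
static float project_x(float x, float z, float near_plane) {
    return near_plane * x / z;   /* z = distance in front of the viewer */
}
```

Two points with the same x = 4, one at distance 2 and one at distance 8, project to 2.0 and 0.5 respectively, so the farther point appears smaller.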
THE OPENGL STATE MACHINE
Drawing 3D graphics is a complicated affair. In the chapters ahead, we cover many OpenGL functions. For a given piece of geometry, many things can affect how it is drawn. Is the object blended with the background? Are we performing front or back face culling? What, if any, texture is currently bound? The list could go on and on. We call this collection of variables the state of the pipeline. A state machine is an abstract model of a collection of state variables, all of which can have various values or just be turned on or off and so on. It simply is not practical to specify all the state variables whenever we try to draw something in OpenGL. Instead, OpenGL employs a state model or state machine to keep track of all these OpenGL state variables. When a state value is set, it remains set until some other function changes it. Many states are simply on or off. Depth testing for example is either turned on or turned off. Geometry drawn with depth testing turned on is checked to make sure it is in front of any objects behind it before being rendered. Any geometry drawn after depth testing is turned back off (a 2D overlay for example) is then drawn without the depth comparison.
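The state-machine behavior is easy to model. The toy sketch below (our own names, not OpenGL's) captures the essential property: each capability is a switch that stays in its last-set position until some other call changes it, and everything drawn afterward uses the current settings.

```c
#include <assert.h>
#include <stdbool.h>

/* A toy model of OpenGL's state machine. The real calls are
   glEnable(GL_DEPTH_TEST), glDisable(...), and glIsEnabled(...). */
enum { CAP_DEPTH_TEST, CAP_BLEND, CAP_CULL_FACE, CAP_COUNT };

static bool state[CAP_COUNT];        /* everything off by default */

static void enable(int cap)     { state[cap] = true;  }
static void disable(int cap)    { state[cap] = false; }
static bool is_enabled(int cap) { return state[cap];  }
```

Once enable(CAP_DEPTH_TEST) is called, depth testing stays on for every subsequent draw until disable(CAP_DEPTH_TEST) flips it back, which is precisely how the 2D-overlay example above works.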
By default, every point, line, or triangle you render is rasterized on-screen in the order in which you specified it when you assembled the primitive batch. This can sometimes be problematic. One problem is that if you draw a solid object made up of many triangles, the triangles drawn first can be drawn over by triangles drawn afterward. For example, say you have an object such as a torus (a donut-shaped object) made up of many triangles. Some of those triangles are on the back side of the torus and some on the front side. You can’t see the back sides—at least you aren’t supposed to see the back sides (omitting for the moment the special case of transparent geometry). Depending on your orientation, the order in which the triangles are drawn may simply make a mess of things. One potential solution would be to sort the triangles and render the ones farther away first, then render the nearer triangles on top of them. This is called the painter’s algorithm, and it is very inefficient in computer graphics for two reasons: first, you must write to every pixel twice wherever any geometry overlaps, and writing to memory slows things down; second, sorting individual triangles would be prohibitively expensive.
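The solution OpenGL actually uses is the depth testing introduced earlier: each fragment carries a depth value, and it replaces the stored pixel only if it is nearer than what is already there. The one-pixel sketch below (our own illustration, not GL code) shows why draw order then stops mattering.

```c
#include <assert.h>

/* A one-pixel "depth buffer": an incoming fragment wins only if it is
   nearer than the fragment already stored. Illustrative sketch. */
static float depth = 1.0f;   /* cleared to the far plane */
static int   color = 0;

static void draw_fragment(float frag_depth, int frag_color) {
    if (frag_depth < depth) {        /* the depth test */
        depth = frag_depth;
        color = frag_color;
    }
}
```

Drawing a near fragment (depth 0.3) first and a far one (depth 0.7) second still leaves the near fragment's color visible: no sorting, and each pixel's final color is written correctly regardless of order.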
FRONT AND BACK FACE CULLING
One of the reasons OpenGL makes a distinction between the front and back sides of triangles is culling. Back face culling can significantly improve performance, and it also fixes problems like the out-of-order torus triangles just described.
This is very efficient, as a whole triangle is thrown away in the primitive assembly stage of rendering, and no wasteful or inappropriate rasterization is performed. General face culling is turned on like this: glEnable(GL_CULL_FACE); and turned back off with the counterpart: glDisable(GL_CULL_FACE); Note, we did not say whether to cull the front or back of anything. That is controlled by another function:
void glCullFace(GLenum mode);
Valid values for the mode parameter are GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK. Throwing away the insides of opaque (nontransparent) geometry then takes just two lines of code: glEnable(GL_CULL_FACE); glCullFace(GL_BACK);
Culling away the front of solid geometry is also useful in some circumstances, for example, showing a rendering of the insides of some figure. When rendering transparent objects (blending is coming up soon), we often render an object twice: once with blending on and the front sides culled, and then again with the back sides culled. This renders the back of the object before the front, a requirement for rendering things transparently.
Another use for OpenGL’s blending capabilities is antialiasing. Under most circumstances, individual rendered fragments are mapped to individual pixels on a computer screen. These pixels are square (or squarish), and usually you can spot the division between two colors quite clearly. These stair-stepped edges, often called jaggies, catch the eye’s attention and destroy the illusion that the image is natural; they are a dead giveaway that the image is computer generated! For many rendering tasks it is desirable to achieve as much realism as possible, particularly in games, simulations, or artistic endeavors.
To get rid of the jagged edges between primitives, OpenGL uses blending to blend the color of the fragment with the destination color of the pixel and its surrounding pixels. In essence, pixel colors are smeared slightly to neighboring pixels along the edges of any primitives. Turning on antialiasing is simple. First, you must enable blending and set the blending function to be the same as you used in the preceding section for transparency:
glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
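Per color channel, this blending function computes the source color weighted by its alpha plus the destination color weighted by one minus that alpha. A minimal sketch of the arithmetic (the helper name is our own):

```c
#include <assert.h>

/* The GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend, per channel:
   final = src * alpha + dst * (1 - alpha). Illustrative sketch. */
static float blend(float src, float dst, float src_alpha) {
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

A red channel of 1.0 drawn at 25% alpha over a black (0.0) background produces 0.25, which is how both transparency and the edge smearing used for antialiasing arise from the same equation.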
You also need to make sure the blend equation is set to GL_ADD, but because this is the default and most common blending equation, we don’t show it here. After blending is enabled and the proper blending function and equation are selected, you can choose to antialias points, lines, and/or polygons (any solid primitive) by calling:
glEnable(GL_POINT_SMOOTH); // Smooth out points
glEnable(GL_LINE_SMOOTH); // Smooth out lines
glEnable(GL_POLYGON_SMOOTH); // Smooth out polygon edges
You should use GL_POLYGON_SMOOTH with care. You might expect to smooth out edges on solid geometry, but there are other tedious rules to making this work. For example, geometry that overlaps requires a different blending mode, and you may need to sort your scene from front to back. We won’t go into the details because this method of solid object antialiasing has fallen out of common use and has largely been replaced by a superior route to smoothing edges on 3D geometry called multisampling. This feature is discussed in the next section. Without multisampling, you can still get this overlapping geometry problem with antialiased lines that overlap. For wireframe rendering, for example, you can usually get away with just disabling depth testing to avoid the depth artifacts at the line intersections.
One of the biggest advantages to antialiasing is that it smooths out the edges of primitives and can lend a more natural and realistic appearance to renderings. Point and line smoothing is widely supported, but unfortunately polygon smoothing is not available on all platforms. Even when GL_POLYGON_SMOOTH is available, it is not as convenient a means of having your whole scene antialiased as you might think. Because it is based on the blending operation, you would need to sort all your primitives from front to back! Yuck.

A more modern addition to OpenGL that addresses this shortcoming is multisampling. When this feature is supported (it is an OpenGL 1.3 or later feature), an additional buffer is added to the framebuffer that includes the color, depth, and stencil values. All primitives are sampled multiple times per pixel, and the results are stored in this buffer. The samples are resolved to a single value each time the pixel is updated, so from the programmer’s standpoint it appears automatic and happens “behind the scenes.” Naturally, this extra memory and processing are not without performance penalties, and some implementations may not support multisampling for multiple rendering contexts. To get multisampling, you must first obtain a rendering context that has support for a multisampled framebuffer. This varies from platform to platform, but GLUT exposes a bitfield (GLUT_MULTISAMPLE) that allows you to request one. For example, to request a multisampled, full-color, double-buffered framebuffer with depth, you would call:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
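Exactly how the per-pixel samples are resolved is left to the implementation; straightforward averaging, sketched below with a hypothetical resolve helper (not a GL function), is the typical behavior, and it shows how an edge pixel half-covered by geometry ends up with an intermediate color.

```c
#include <assert.h>

/* Resolve n samples of one pixel to a single value by averaging.
   Illustrative sketch; the OpenGL specification does not mandate a
   particular resolve, but averaging is the common case. */
static float resolve(const float samples[], int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += samples[i];
    return sum / (float)n;
}
```

A 4-sample pixel where white geometry (1.0) covers two samples and the black background (0.0) covers the other two resolves to 0.5, a smoothed edge with no blending or sorting required.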
You can turn multisampling on and off using the glEnable/glDisable combination with the GL_MULTISAMPLE token: glEnable(GL_MULTISAMPLE); or glDisable(GL_MULTISAMPLE); Another important note about multisampling is that when it is enabled, the point, line, and polygon smoothing features described earlier are ignored, even if enabled. This means you cannot use point and line smoothing at the same time as multisampling. On a given OpenGL implementation, points and lines may look better with smoothing turned on instead of multisampling. To accommodate this, you might turn off multisampling before drawing points and lines and then turn it back on for other solid geometry. The following pseudocode shows a rough outline of how to do this:
glDisable(GL_MULTISAMPLE);
glEnable(GL_POINT_SMOOTH);
// ... Draw some smooth points
glDisable(GL_POINT_SMOOTH);
glEnable(GL_MULTISAMPLE);
Of course, if you do not have a multisampled buffer to begin with, OpenGL behaves as if GL_MULTISAMPLE were disabled. The multisample buffers use the RGB values of fragments by default and do not include the alpha component of the colors. You can change this by calling glEnable with one of the following three values:
GL_SAMPLE_ALPHA_TO_COVERAGE—Use the alpha value.
GL_SAMPLE_ALPHA_TO_ONE—Set alpha to 1 and use it.
GL_SAMPLE_COVERAGE—Use the value set with glSampleCoverage.
When GL_SAMPLE_COVERAGE is enabled, the glSampleCoverage function allows you to specify a value that is ANDed (bitwise) with the fragment coverage value:
void glSampleCoverage(GLclampf value, GLboolean invert);
This fine-tuning of how the multisample operation works is not strictly defined by the OpenGL specification, and the exact results may vary from implementation to implementation.
Turning different OpenGL features on and off changes the internal state of the driver. These state changes can be costly in terms of rendering performance. Frequently, performance-sensitive programmers go to great lengths to sort all the drawing commands so that geometry needing the same state is drawn together. This state sorting is one of the more common techniques to improve rendering speed in games.
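A minimal sketch of state sorting (the types and names are our own illustration): tag each draw call with a key describing the state it needs and sort the draw list by that key, so that expensive state changes happen once per group rather than once per object.

```c
#include <assert.h>
#include <stdlib.h>

/* One queued draw call: a key encoding the required state (texture,
   blend mode, ...) plus an id for the geometry to draw. */
typedef struct { int state_key; int object_id; } draw_call;

static int by_state(const void *a, const void *b) {
    return ((const draw_call *)a)->state_key
         - ((const draw_call *)b)->state_key;
}

/* Sort the frame's draw list so calls sharing state are adjacent. */
static void sort_draws(draw_call *calls, size_t n) {
    qsort(calls, n, sizeof *calls, by_state);
}
```

After sorting, the renderer walks the list and issues a state change only when the key differs from the previous call's, instead of re-binding textures and toggling features for every object.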
Note: you can find more detailed information in the OpenGL blue book, of which this is a distilled version.