**Suppose we have a video monitor with a display area that measures 12 inches across and 9.6 inches high. If the resolution is 1280 by 1024 and the aspect ratio is 1, what is the diameter of each screen point?**

Width:

1280 pixels span the 12-inch display width, so the width of one pixel is 12 / 1280 = 0.009375 inch.

Height:

1024 pixels span the 9.6-inch display height, so the height of one pixel is 9.6 / 1024 = 0.009375 inch.

Since the aspect ratio is 1, each screen point is square, with equal width and height.

Answer: 0.009375 inches
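The arithmetic above can be checked directly; as a small sketch (the function names are just for illustration), each dimension gives the same point size because the aspect ratio is 1:

```c
#include <assert.h>
#include <math.h>

/* Screen-point size for a 12 x 9.6 inch display at 1280 x 1024. */
double pixelWidth(void)  { return 12.0 / 1280.0; }
double pixelHeight(void) { return  9.6 / 1024.0; }
```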

**Suppose you want to draw a square using OpenGL. The coordinates of the square are (1,1), (1,2), (2,1), and (2,2). There are at least four different glBegin function invocations and the corresponding glVertex* function calls that will achieve the drawing of this square. One of them is this:**

**glBegin(GL_QUADS); glVertex2i(1,1); glVertex2i(1,2); glVertex2i(2,2); glVertex2i(2,1); glEnd();**

**Give three other code sequences (from glBegin through glEnd) that will draw the square. Which of these will draw a filled square?**

- A filled square using GL_POLYGON (the vertices must be listed in order around the boundary; the order (1,1), (1,2), (2,1), (2,2) would produce a bowtie):

```c
glBegin(GL_POLYGON);
  glVertex2i(1, 1);
  glVertex2i(1, 2);
  glVertex2i(2, 2);
  glVertex2i(2, 1);
glEnd();
```

- A filled square using GL_TRIANGLE_FAN (the first vertex is shared by both triangles of the fan):

```c
glBegin(GL_TRIANGLE_FAN);
  glVertex2i(1, 1);
  glVertex2i(1, 2);
  glVertex2i(2, 2);
  glVertex2i(2, 1);
glEnd();
```

- The outline of the square using GL_LINE_LOOP:

```c
glBegin(GL_LINE_LOOP);
  glVertex2i(1, 1);
  glVertex2i(1, 2);
  glVertex2i(2, 2);
  glVertex2i(2, 1);
glEnd();
```

Of these, the GL_POLYGON and GL_TRIANGLE_FAN versions (like the original GL_QUADS version) draw a filled square; GL_LINE_LOOP draws only the outline.

**Briefly explain the relationship between the windowing system provided by something like Microsoft Windows or Apple OS X, the drawing produced by calls to the OpenGL library, and GLUT.**

OpenGL itself is only a drawing API: it knows nothing about windows, menus, or input. Its calls are handed to the graphics driver, which translates them into command buffers executed on the GPU. The windowing system (GDI on Microsoft Windows, Quartz on OS X, X11 on Unix-like systems) owns the screen, creates and manages windows, and delivers input events. An OpenGL context is created and bound to a window through a platform-specific API: WGL on Windows, CGL on OS X, GLX on X11. GLUT sits on top of both: it wraps these platform-specific calls behind a single portable interface, so a program can open a window, obtain an OpenGL context, and receive keyboard, mouse, and redisplay events without touching the native windowing API. OpenGL then draws into the window that GLUT created.

**Explain the purpose of glutDisplayFunc and the function provided as its argument. Why is such a *callback* function required? In general, what is the purpose of a callback function (as provided by GLUT)?**

glutDisplayFunc registers the display callback for the current window. Whenever GLUT determines that the window needs to be redrawn (when it is first shown, uncovered, resized, or after glutPostRedisplay), it calls the registered function, which must redraw the entire window contents, including any ancillary buffers the program uses. The callback takes no parameters; GLUT sets the current window before invoking it. Such a callback is required because GLUT owns the event loop (glutMainLoop never returns): the program cannot know when a redraw will be needed, so it instead hands GLUT a function to call at the right time. In general, GLUT callbacks (display, reshape, keyboard, mouse, idle, timer) are how an event-driven program supplies the code to run in response to each kind of event.

**There are two basic OpenGL approaches to specifying the vertices associated with drawing objects: (1) explicitly giving their values as arguments to functions like glVertex2i and (2) specifying a collection of vertices in a vertex array. Briefly describe the difference between these approaches. Then indicate a potential benefit of using vertex arrays.**

With explicit calls, every vertex is sent through a separate glVertex* invocation, and a vertex shared by several primitives must be sent once for each use. With a vertex array, the program stores all vertex data in one array, tells OpenGL where it is, and then draws many vertices with a single call such as glDrawArrays or glDrawElements, where shared vertices are referenced by index. A potential benefit: vertex arrays reduce both the number of function calls and the number of vertices transferred when vertices are shared. In addition, OpenGL may cache recently processed vertices and reuse them, instead of pushing the same vertex through the transformation pipeline many times.
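A rough way to see the saving (illustrative counts only, not OpenGL calls): a square drawn as two independent triangles sends two of its corners twice, while an indexed vertex array stores each corner once and reuses it by index:

```c
#include <assert.h>

/* Square split into two triangles.
 * Explicit mode: every triangle corner is sent, shared corners twice. */
#define EXPLICIT_VERTICES 6          /* 2 triangles x 3 glVertex calls */

/* Vertex-array mode: 4 unique vertices plus 6 small indices. */
static const float vertices[4][2] = {
    {1.0f, 1.0f}, {2.0f, 1.0f}, {2.0f, 2.0f}, {1.0f, 2.0f}
};
static const unsigned char indices[6] = { 0, 1, 2,   0, 2, 3 };

int uniqueVerticesStored(void) { return sizeof vertices / sizeof vertices[0]; }
int indexCount(void)           { return sizeof indices / sizeof indices[0]; }
```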

**What happens when OpenGL draws two filled objects (like triangles) so that one overlaps the other? Explain how this behavior can be controlled.**

By default, OpenGL simply rasterizes primitives in the order they are drawn, so where two filled triangles overlap, the fragments of the one drawn second overwrite those of the one drawn first (painter's order). This behavior is controlled with the depth buffer: request a depth buffer (GLUT_DEPTH), enable depth testing with glEnable(GL_DEPTH_TEST), and clear the buffer each frame; each incoming fragment is then kept only if it is closer to the viewer than the value already stored, so visibility is determined by depth rather than drawing order. Drawing order, the depth-comparison function (glDepthFunc), and blending (glEnable(GL_BLEND)) give further control over how overlapping fragments combine.
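The depth-buffer rule can be simulated on the CPU; this sketch keeps, per pixel, the fragment with the smallest depth, which is what depth testing with the default GL_LESS comparison does (the struct and function names are invented for illustration):

```c
#include <assert.h>

/* Minimal z-buffer resolve for one pixel: an incoming fragment replaces
 * the stored color only if it is closer (smaller z), mimicking GL_LESS. */
typedef struct { float z; int color; } Pixel;

void writeFragment(Pixel *p, float z, int color)
{
    if (z < p->z) {      /* depth test */
        p->z = z;
        p->color = color;
    }
}
```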

**Bresenham’s algorithm for drawing a line with a slope *m* having an absolute value less than 1 (that is, |*m*| < 1.0) is given in the text on pages 143-4 and is also displayed below. Assume we want to modify the algorithm so it performs line stippling in the same way as glLineStipple. Modify the function so it accepts two additional parameters (repeatFactor and pattern, the same as those used with glLineStipple), and so it will display the appropriately-stippled line. You do not need to actually compile the code; just provide a display of the modified code.**

```c
#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as start position. */
    if (x0 > xEnd) {
        x = xEnd;  y = yEnd;
        xEnd = x0;
    }
    else {
        x = x0;  y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}
```

The modified version adds the repeatFactor and pattern parameters. Following glLineStipple, the low-order bit of the 16-bit pattern decides whether each pixel is plotted; the pattern is rotated one bit after every repeatFactor pixels, with the used bit wrapping around to the high end. It also handles steep lines by swapping the roles of x and y:

```c
void lineBres (int x0, int y0, int xEnd, int yEnd,
               int repeatFactor, unsigned short pattern)
{
    int temp, yStep, x, y;
    int count = repeatFactor;      /* pixels left before the pattern advances */
    unsigned short carry;
    int steep = (abs (yEnd - y0) > abs (xEnd - x0));

    if (steep) {                   /* swap x and y roles for |m| > 1 */
        temp = x0;   x0 = y0;     y0 = temp;
        temp = xEnd; xEnd = yEnd; yEnd = temp;
    }
    if (x0 > xEnd) {               /* always step in the +x direction */
        temp = x0; x0 = xEnd; xEnd = temp;
        temp = y0; y0 = yEnd; yEnd = temp;
    }

    int deltaX = xEnd - x0;
    int deltaY = abs (yEnd - y0);
    int error = -(deltaX + 1) / 2;

    y = y0;
    yStep = (y0 < yEnd) ? 1 : -1;

    for (x = x0; x <= xEnd; x++) {
        if (pattern & 1) {         /* low-order bit selects this pixel */
            if (steep)
                setPixel (y, x);
            else
                setPixel (x, y);
            carry = 0x8000;        /* used bit wraps to the high end */
        }
        else
            carry = 0;

        if (--count <= 0) {        /* rotate the 16-bit pattern */
            pattern = (pattern >> 1) + carry;
            count = repeatFactor;  /* restart the repeat count */
        }

        error += deltaY;
        if (error >= 0) {
            y += yStep;
            error -= deltaX;
        }
    }
}
```

**What is meant by the term *anti-aliasing*? Briefly describe a few techniques that could be used to achieve this.**

Anti-aliasing is any technique for reducing the stair-step (aliasing) artifacts that appear when a continuous image is sampled onto a discrete pixel grid, producing smoother edges. The usual approach is to set pixel intensities so that there is a gradual transition between the color of a line or edge and the background color. Techniques include prefiltering and supersampling (postfiltering). Prefiltering (area sampling) treats a pixel as an area rather than a point and computes the pixel's color from how much of that area is covered by each of the scene's objects, shading edge pixels in proportion to coverage. Supersampling computes the image on a sampling grid finer than the frame buffer, that is, at a higher spatial resolution, and then averages groups of samples down to the final resolution; because the filtering is carried out after sampling, this is also called postfiltering.
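Supersampling can be sketched as averaging a block of subpixel samples into one pixel; here a 2 x 2 block whose left half is covered by an edge averages to a half-intensity pixel (a toy illustration, not an OpenGL feature):

```c
#include <assert.h>

/* Average an n x n block of subpixel samples (each 0..255) into one
 * pixel intensity: postfiltering after sampling at higher resolution. */
int downsample(const int *samples, int n)
{
    int sum = 0;
    for (int i = 0; i < n * n; i++)
        sum += samples[i];
    return sum / (n * n);
}
```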

**What are the two major techniques used with OpenGL to specify the color of objects?**

OpenGL supports two major ways of specifying color: RGB(A) mode, in which a color is given directly as red, green, and blue (and optionally alpha) components with glColor*, and color-index mode, in which glIndex* selects an entry in a color lookup table. (When lighting is enabled, colors instead come from material properties, set with glMaterial* or tied to the current color via glColorMaterial.)

**What are the coordinates of a line from (1,1) to (3,3) after it has been rotated 45 degrees clockwise? What would the coordinates be if it had been rotated 90 degrees counterclockwise instead? In both cases, assume the rotation is about the origin.**

45° clockwise about the origin rotates both endpoints onto the x-axis: (1,1) → (√2, 0) ≈ (1.414, 0) and (3,3) → (3√2, 0) ≈ (4.243, 0).

90° counterclockwise maps (x, y) to (−y, x): (1,1) → (−1, 1) and (3,3) → (−3, 3).

**Determine the coordinates of the line in the previous question, but assume the rotations are about the point (2,2).**

Rotation about (2,2) is done by translating (2,2) to the origin, rotating, and translating back.

45° clockwise: (1,1) → (2 − √2, 2) ≈ (0.586, 2) and (3,3) → (2 + √2, 2) ≈ (3.414, 2).

90° counterclockwise: (1,1) → (3, 1) and (3,3) → (1, 3).

**Translation of an object by (∆x, ∆y) is relatively simple: add ∆x to the x component of each vertex, and add ∆y to the y component of each vertex. In general, this isn’t the way OpenGL (and other graphics libraries) usually implement translation. What would you usually do to implement translation? Consider only two-dimensional drawings.**

Rather than modifying each vertex by hand, OpenGL represents translation (like all its geometric transformations) as a matrix. For 2D work, vertices are treated as homogeneous column vectors and a translation matrix is multiplied onto the current modelview matrix; the pipeline then transforms every vertex that follows. In code: call glPushMatrix() to save the current matrix, apply the translation with glTranslatef, issue the vertices of the object, and call glPopMatrix() at the end to restore the matrix. The code would look something like this:

```c
glPushMatrix();
glTranslatef(xPoint, yPoint, 0);
glBegin(GL_POLYGON);
  glColor3f(theColor.red, theColor.green, theColor.blue);
  glVertex2f(d1.x, d1.y);
  glVertex2f(d2.x, d2.y);
  glVertex2f(d3.x, d3.y);
glEnd();
glPopMatrix();
```
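Under the hood, glTranslatef amounts to a matrix-vector multiplication with a homogeneous translation matrix; a minimal 2D sketch (the function name is invented for illustration):

```c
#include <assert.h>

/* Translate a 2D point with a 3 x 3 homogeneous matrix, the way the
 * matrix stack implements translation for 2D points (x, y, 1). */
void translatePoint(double dx, double dy, double *x, double *y)
{
    /* Row-major translation matrix applied to the column vector (x, y, 1). */
    double m[3][3] = { {1, 0, dx}, {0, 1, dy}, {0, 0, 1} };
    double v[3] = { *x, *y, 1.0 }, r[3] = { 0, 0, 0 };
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[i] += m[i][j] * v[j];
    *x = r[0];
    *y = r[1];
}
```

Representing translation as a matrix, instead of per-vertex addition, lets it compose with rotation and scaling into a single matrix applied uniformly to every vertex.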

**Explain the essential differences between a clipping window and a viewport.**

When a clipping window is placed on the world, only the objects and parts of objects inside it are viewed; points and lines outside the window are "clipped off", and this process is called clipping. The clipping window is a rectangular region defined in the world coordinate system, the system used to locate objects in the scene itself; since world coordinates are tied to no display device, they may be positive, negative, or fractional. The viewport is the portion of the screen where the picture enclosed by the clipping window will be drawn, and it is defined in screen coordinates. Displaying the picture therefore requires a coordinate transformation from the world coordinates of the clipping window to the screen coordinates of the viewport. In short, the clipping window selects *what* is shown, while the viewport determines *where* on the screen it appears.

**Consider the three clipping algorithms (for straight lines and a rectangular clipping window) Cohen-Sutherland, Liang-Barsky, and Nicholl-Lee-Nicholl. These algorithms are all similar in one respect, and they also differ from each other in one significant way. What is the similarity, and what is the difference.**

The similarity: all three algorithms clip a straight line against a rectangular window by finding its intersections with the window boundaries and discarding the outside portions, and all three use preliminary tests to avoid computing intersections unnecessarily.

The difference lies in how much testing each does before computing intersections. Cohen-Sutherland divides the plane into nine regions using the infinite extensions of the four window boundaries, assigns each endpoint a 4-bit region outcode, and uses the codes for trivial accept and reject; it may still compute intersections along a line that is ultimately rejected, with each calculation requiring a division and a multiplication. Liang-Barsky uses a parametric line representation to set up a more efficient method that reduces intersection calculations, and is generally more efficient than Cohen-Sutherland. Nicholl-Lee-Nicholl does still more region testing in the xy plane before any intersection is computed, calculating at most the two intersection points actually needed, so the extra intersection calculations of the other two algorithms are eliminated.
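The nine-region encoding can be sketched as a 4-bit outcode; a segment is trivially accepted when both endpoint codes are 0 and trivially rejected when the codes share a set bit (constant names follow the usual convention but are otherwise illustrative):

```c
#include <assert.h>

/* Cohen-Sutherland outcode for a point against the clipping window
 * [xmin, xmax] x [ymin, ymax]: one bit per violated boundary. */
enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

int outcode(double x, double y,
            double xmin, double ymin, double xmax, double ymax)
{
    int code = 0;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}
```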

**Consider the rectangular clipping window with (1,1) as its lower left corner and (3,3) as its upper right corner. What are the coordinates of the line from (0.5,1.5) to (2,4) when it is clipped? What are the coordinates of the line from (1.5,3.5) to (4,0) when it is clipped?**

Clipping window: lower left (1,1), upper right (3,3).

(0.5, 1.5) (2, 4) –> (1, 7/3) (1.4, 3), i.e. approximately (1, 2.33) to (1.4, 3)

(1.5, 3.5) (4, 0) –> (13/7, 3) (3, 1.4), i.e. approximately (1.86, 3) to (3, 1.4)
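These coordinates can be verified with a Liang-Barsky clipper, a minimal sketch of the parametric method discussed above:

```c
#include <assert.h>
#include <math.h>

/* Liang-Barsky: clip segment (x0,y0)-(x1,y1) to [xmin,xmax] x [ymin,ymax].
 * Returns 0 if the segment lies entirely outside; otherwise overwrites
 * the endpoints with the clipped ones. */
int clipLB(double *x0, double *y0, double *x1, double *y1,
           double xmin, double ymin, double xmax, double ymax)
{
    double dx = *x1 - *x0, dy = *y1 - *y0;
    double p[4] = { -dx, dx, -dy, dy };
    double q[4] = { *x0 - xmin, xmax - *x0, *y0 - ymin, ymax - *y0 };
    double t0 = 0.0, t1 = 1.0;

    for (int i = 0; i < 4; i++) {
        if (p[i] == 0.0) {                 /* parallel to this boundary */
            if (q[i] < 0.0) return 0;
        } else {
            double t = q[i] / p[i];
            if (p[i] < 0.0) { if (t > t0) t0 = t; }   /* entering */
            else            { if (t < t1) t1 = t; }   /* leaving  */
        }
    }
    if (t0 > t1) return 0;                 /* no visible portion */

    double nx0 = *x0 + t0 * dx, ny0 = *y0 + t0 * dy;
    *x1 = *x0 + t1 * dx;  *y1 = *y0 + t1 * dy;
    *x0 = nx0;            *y0 = ny0;
    return 1;
}
```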

**The Sutherland-Hodgman polygon-clipping algorithm works in stages. What is the difference between the stages? What is a potentially significant advantage of this algorithm over one that does not work in stages?**

Sutherland-Hodgman clips the whole polygon against one window boundary at a time: there is one stage per window edge (left, right, bottom, top), and each stage takes the vertex list produced by the previous stage and outputs a new vertex list clipped against its own boundary. The stages differ only in which boundary they clip against; each stage applies the same per-edge rules, outputting intersection points and inside vertices as it walks the polygon's edges. A potentially significant advantage of the staged organization is that it forms a pipeline: a vertex can be passed to the next stage as soon as the current stage is finished with it, so the four clippers can operate concurrently, intermediate results need not be stored as complete polygons, and the approach maps naturally onto hardware.

**What makes line-clipping algorithms (like Cohen-Sutherland), by themselves, generally unsuitable for clipping the boundary lines, and thus the areas, of filled regions?**

A line clipper treats each boundary line as an independent segment and simply discards the portions outside the window. Applied to the edges of a filled region, this generally leaves a set of disconnected edge fragments rather than a closed boundary: the clipped polygon may need new edges that run along the window border, and new vertices at window corners, which no line-clipping algorithm will produce. Since area filling requires the region to be bounded by a closed polygon, clipping filled regions calls for a polygon-clipping algorithm such as Sutherland-Hodgman, which outputs a closed (possibly enlarged) vertex list rather than independent clipped segments.

**Text clipping can be done using three obvious methods. What are these?**

All-or-none string clipping: if any part of a string lies outside the clipping window, the entire string is discarded; only strings entirely inside are displayed. This is the fastest method, requiring only a bounding-rectangle test per string.

All-or-none character clipping: the string is clipped character by character; any character whose bounding box is not entirely inside the window is discarded, but the rest of the string is kept.

Component (character-boundary) clipping: individual characters are themselves clipped, so a character straddling the window boundary is partially displayed. For bitmapped fonts this means clipping individual pixels of the bit pattern; for outline fonts it means clipping the line and curve segments that define the character.

**Assume we are clipping the components of individual characters. That is, if a character’s boundary intersects with the clipping region boundary, then part of the character will be clipped and not be included in the clipping window. Which character type, outline or bitmapped, is likely to be more difficult to clip?**

Outline characters are likely to be more difficult to clip. A bitmapped character is just a small pixel array, so clipping it only requires keeping the pixels that fall inside the window and discarding the rest. An outline character is defined by line and curve segments, so partial clipping requires computing the intersections of those segments with the window boundary and closing the resulting contour so the character can still be scan-converted correctly, which takes considerably more calculation.