
Computer Graphics and Visualization

1
Computer Graphics

Computer graphics deals with all aspects of creating images with a computer:
- Hardware
- Software
- Applications

2
Example

Where did this image come from?

What hardware/software did we need to produce it?
3
Preliminary Answer

Application: The object is an artist's rendition of the sun for an animation (planetarium).
Software: Maya for modeling and rendering, but Maya is built on top of OpenGL.
Hardware: PC with a graphics card for modeling and rendering.

4
Applications of CG

Four Major Areas


1. Display of Information
2. Design
3. Simulation and Animation
4. User Interface

5
1. Display of Information

6
7
8
2. Design

9
10
3. Simulation and Animation

11
12
4. User Interface

13
A Graphics System

It is a computer system; hence, it must have all the components of a general-purpose computer system.
The 5 major elements are:
1. Input devices
2. Processor
3. Memory
4. Frame buffer
5. Output devices
14
(Diagram: input devices feed the processor and memory; the image is formed in the frame buffer (FB) and shown on the output device)
15
16
The Frame Buffer

Phosphors are logically arranged into a 2D array of picture elements (pixels).
The frame buffer stores the array of pixels that are to be activated when the display is refreshed.
The resolution of the frame buffer denotes the size of the array, e.g. 1280 x 1024.

17
The image we see on the output device is an array, the raster, of picture elements, or pixels, produced by the graphics system.
Pixels are stored in a part of memory called the frame buffer.
The frame buffer can be viewed as the core element of a graphics system.
Resolution, the number of pixels in the frame buffer, determines the detail that you can see in the image.
18
The depth, or precision, of the frame buffer is defined as the number of bits that are used for each pixel.
It determines properties such as how many colors can be represented on a given system.
For example, a 1-bit-deep frame buffer allows only two colors, while an 8-bit-deep frame buffer allows 2^8 = 256 colors.
In full-color systems, there are 24 (or more) bits per pixel (RGB).
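As a minimal sketch (not from the slides; the resolution and depth are example values), the number of colors and the frame buffer size follow directly from the depth:

#include <stdio.h>

int main(void)
{
    int width = 1280, height = 1024;   /* example resolution */
    int depth = 24;                    /* bits per pixel (full-color RGB) */

    /* number of representable colors is 2^depth */
    unsigned long long colors = 1ULL << depth;

    /* frame buffer size in bytes: pixels * bits per pixel / 8 */
    unsigned long long bytes = (unsigned long long)width * height * depth / 8;

    printf("colors: %llu, frame buffer: %llu bytes\n", colors, bytes);
    return 0;
}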
19
A triangle is specified by its three vertices, but to display its outline we must draw the three line segments connecting the vertices.
The graphics system must generate a set of pixels that appear as line segments to the viewer.
The conversion of geometric entities to pixel colors and locations in the frame buffer is known as rasterization, or scan conversion.
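As a sketch of the idea (a simple DDA scan converter, not the algorithm any particular system uses; it assumes a write_pixel(x, y, color) routine like the one in the raster model later in these notes):

#include <stdlib.h>

void write_pixel(int x, int y, int color);   /* assumed: sets one frame buffer pixel */

/* Scan convert the line segment from (x0, y0) to (x1, y1). */
void rasterize_line(int x0, int y0, int x1, int y1, int color)
{
    int dx = abs(x1 - x0), dy = abs(y1 - y0);
    int steps = dx > dy ? dx : dy;
    if (steps == 0) {                        /* degenerate segment: a single pixel */
        write_pixel(x0, y0, color);
        return;
    }
    float xinc = (x1 - x0) / (float)steps;
    float yinc = (y1 - y0) / (float)steps;
    float x = (float)x0, y = (float)y0;
    for (int i = 0; i <= steps; i++) {
        write_pixel((int)(x + 0.5f), (int)(y + 0.5f), color);  /* round to nearest pixel */
        x += xinc;
        y += yinc;
    }
}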

20
Graphics Processing Units (GPUs) are custom-tailored to carry out specific graphics functions.
The GPU can be either on the mother board of
the system or on a graphics card.
The frame buffer is accessed through the
graphics processing unit and usually is on the
same circuit board as the GPU.
GPUs are so powerful that they can often be
used as mini supercomputers for computing.
21
Monochrome System

Frame buffer contents (1 bit per pixel) and the resulting display:
000000000000000000000000
000000000000000000000000
000000000000000000000000
000000000000000000000000
000011000011000000000000
000011000011000000000000
000000000000000000000000
000000000000000000000000
001000000000001000000000
000100000000010000000000
000010000000100000000000
000001111111000000000000
000000000000000000000000
000000000000000000000000
000000000000000000000000
000000000000000000000000
000000000000000000000000

22
CRT

Can be used either as a line-drawing device (calligraphic) or to display the contents of the frame buffer (raster mode)
23
Cathode Ray Tube (CRT)

(Diagram: electron beam, deflection field, and phosphor-coated screen)

24
Cathode Ray Tube (CRT)

(Diagram: deflectors, deflection field, and phosphor-coated screen)

25
Phosphors Decay

Light coming from a single phosphor does not last long.
The human eye can detect a difference within 20 milliseconds.
To appear without flicker, the entire screen should be redrawn (refreshed) at least 50 times per second (50 Hz).

26
Interlacing

Because phosphors are small, it is difficult for the eye to distinguish one phosphor from the next.
Interlacing: refresh every other row of the display within the 50 Hz cycle.
In a non-interlaced system, the pixels are displayed row by row, at the refresh rate.
In an interlaced display, odd rows and even rows are refreshed alternately.
27
28
Color CRT

(Diagram: triad of phosphor dots, deflectors, and phosphor field)

29
Color Displays

The color code is stored in the frame buffer.
The display translates the color code into intensities for the three electron guns.
Each gun fires at the appropriately colored phosphor in a group of three (known as a triad).
Triad = pixel

30
Shadow Mask CRT

31
LEDs, liquid-crystal displays (LCDs), and plasma panels all use a two-dimensional grid to address individual light-emitting elements.
The two outside plates each contain parallel grids of wires that are oriented perpendicular to each other.
By sending electrical signals to the proper wires in each grid, we can control the electrical field at a location determined by the intersection of two wires.
32
The middle plate in an LED panel contains light-
emitting diodes that can be turned on and off by
the electrical signals sent to the grid.
In an LCD display, the electrical field controls the
polarization of the liquid crystals in the middle
panel.
A plasma panel uses the voltages on the grids to
energize gases embedded between the glass
panels holding the grids. The energized gas
becomes a glowing plasma.
33
Input Devices

34
Image Formation

In computer graphics, we form images, which are generally two-dimensional, using a process analogous to how images are formed by physical imaging systems:
- Cameras
- Microscopes
- Telescopes
- Human visual system

35
Modern systems can exploit the capabilities of the software and hardware to create realistic images of computer-generated 3D objects.
This task involves many aspects of image formation, such as lighting, shading, and properties of materials.
Computer-generated images are synthetic; traditional imaging methods include cameras and the human visual system.

36
Elements of Image Formation

Objects
Viewer
Light source(s)

Attributes that govern how light interacts with the materials in the scene
Note the independence of the objects, the viewer, and the light source(s)
The object exists in space independent of any image-formation process and of any viewer.
37
Light

Light is the part of the electromagnetic spectrum that causes a reaction in our visual systems.
Generally these are wavelengths in the range of about 350-750 nm (nanometers).
Long wavelengths appear as reds and short wavelengths as blues.
38
Ray Tracing and
Geometric Optics
Follow rays of light from a point source, finding which rays enter the lens of the camera.
Each ray of light may have multiple interactions with objects before being absorbed or going to infinity.
Ray tracing and photon mapping are image-formation techniques that are based on these ideas and that can form the basis for producing computer-generated images.
Another approach, radiosity, is based on conservation of energy; it requires too much computation to be done in real time.
39
Pinhole Camera

Use trigonometry to find the projection of the point (x, y, z):

xp = -x/(z/d)    yp = -y/(z/d)    zp = -d

These are the equations of simple perspective.

40
The point (xp, yp, -d) is called the projection of the point (x, y, z).
The field, or angle, of view of our camera is the angle made by the largest object that our camera can image on its film plane.
If h is the height of the camera (its film plane), the angle of view is

θ = 2 tan⁻¹(h / 2d)
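A small numerical sketch of these two formulas (the distances and the point are illustrative values, not from the slides):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double d = 1.0;                        /* distance from pinhole to film plane */
    double x = 0.5, y = 0.25, z = 2.0;     /* a point in front of the pinhole */

    /* simple perspective projection onto the film plane */
    double xp = -x / (z / d);
    double yp = -y / (z / d);

    /* angle of view for a film plane of height h */
    double h = 1.0;
    double theta = 2.0 * atan(h / (2.0 * d));

    printf("projection: (%.3f, %.3f, %.3f)\n", xp, yp, -d);
    printf("angle of view: %.1f degrees\n", theta * 180.0 / acos(-1.0));
    return 0;
}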

41
The ideal pinhole camera has an infinite depth of
field: Every point within its field of view is in focus.
Every point in its field of view projects to a point
on the back of the camera.
The pinhole camera has two disadvantages.
- First, because the pinhole is so small - it admits only a
single ray from a point source - almost no light enters the
camera.
- Second, the camera cannot be adjusted to have a
different angle of view.

42
By replacing the pinhole with a lens, we solve the two
problems of the pinhole camera.
First, the lens gathers more light than can pass through
the pinhole. The larger the aperture of the lens, the more
light the lens can collect.
Second, by picking a lens with the proper focal length (equivalent to choosing d for the pinhole camera), we can achieve any desired angle of view (up to 180 degrees).
Lenses, however, do not have an infinite depth of field: Not
all distances from the lens are in focus.
Like the pinhole camera, computer graphics produces
images in which all objects are in focus.
43
Human Visual System

Light enters the eye through the lens and cornea.
The iris opens and closes to adjust the amount of light entering the eye.
The lens forms an image on a 2D structure called the retina.
The rods and cones are light sensors located on the retina.
They are excited by electromagnetic energy in the range of 350 to 780 nm.
The rods account for night vision and are not color sensitive; the cones are responsible for color vision.
The final processing is done in a part of the brain called the visual cortex, where functions such as object recognition are carried out.
44
Synthetic Camera Model

(Diagram: a projector from the center of projection through a point p forms the projection of p on the image plane)

45
The image is formed on the
film plane at the back of the
camera.
The specification of the objects
is independent of the
specification of the viewer.
Hence, within a graphics
library, there will be separate
functions for specifying the
objects and the viewer.

46
47
48
The Programmer's Interface

The programmer sees the graphics system through a software interface: the Application Programmer Interface (API)

49
The Pen-Plotter Model
A pen plotter produces images by
moving a pen held by a gantry, a
structure that can move the pen in
two orthogonal directions across the
paper.
moveto(x,y);    /* move the pen to (x,y) without drawing */
lineto(x,y);    /* draw a line from the current position to (x,y) */

moveto(0,0);    /* the following sequence traces a unit square */
lineto(1,0);
lineto(1,1);
lineto(0,1);
lineto(0,0);
50
A raster-based, but still limiting, 2D model relies on writing pixels directly into a frame buffer.
write_pixel(x, y, color);
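As a minimal sketch of this model (assumed names and sizes, not from the slides), write_pixel can be thought of as storing a color into a 2D array that represents the frame buffer:

#define WIDTH  1280
#define HEIGHT 1024

static int frame_buffer[HEIGHT][WIDTH];   /* one color value per pixel */

/* Store a color at pixel (x, y); out-of-range requests are ignored. */
void write_pixel(int x, int y, int color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame_buffer[y][x] = color;
}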

51
Three-Dimensional APIs

Functions that specify what we need to form an image
- Objects
- Viewer
- Light Source(s)
- Materials
Other information
- Input from devices such as mouse and keyboard
- Capabilities of system

52
Advantages

Separation of objects, viewer, light sources
Two-dimensional graphics is a special case of three-dimensional graphics
Leads to simple software API
- Specify objects, lights, camera, attributes
- Let implementation determine image
Leads to fast hardware implementation

53
Object Specification

Most APIs support a limited set of primitives, including
- Points (0D object)
- Line segments (1D objects)
- Polygons (2D objects)
- Some curves and surfaces
Quadrics
Parametric polynomials
All are defined through locations in space
or vertices
54
Four types of necessary specifications:
1. Position: The camera location is usually given by the position of the center of the lens, the center of projection (COP).
2. Orientation: Once we have positioned the camera, we can place a camera coordinate system with its origin at the center of projection. We can then rotate the camera independently around the three axes of this system.
3. Focal length: The focal length of the lens determines the size of the image on the film plane.
4. Film plane: The back of the camera has a height and a width.
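For illustration only (legacy OpenGL/GLU utility calls with made-up values; the slides do not prescribe these functions), the four specifications map naturally onto calls such as:

#include <GL/gl.h>
#include <GL/glu.h>

void set_camera(void)
{
    /* focal length and film plane: vertical field of view, aspect ratio,
       near and far clipping distances */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

    /* position and orientation: eye point (COP), look-at point, up direction */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    /* eye (center of projection) */
              0.0, 0.0, 0.0,    /* point the camera looks at */
              0.0, 1.0, 0.0);   /* up vector */
}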
55
Example

glBegin(GL_POLYGON);          /* type of object */
glVertex3f(0.0, 0.0, 0.0);    /* location of a vertex */
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();                      /* end of object definition */

56
Camera Specification

Six degrees of freedom:
- Position of center of lens
- Orientation
Lens
Film size
Orientation of film plane

57
Lights and Materials

Types of lights
- Point sources vs distributed sources
- Spot lights
- Near and far sources
- Color properties
Material properties
- Absorption: color properties
- Scattering
Diffuse
Specular

58
Additive and Subtractive Color

Additive color
- Form a color by adding amounts of three
primaries
CRTs, projection systems, positive film
- Primaries are Red (R), Green (G), Blue (B)
Subtractive color
- Form a color by filtering white light with cyan
(C), Magenta (M), and Yellow (Y) filters
Light-material interactions
Printing
Negative film
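A common idealized relationship between the two models, shown as a sketch rather than something from the slides: each subtractive primary is the complement of one additive primary.

/* Idealized conversion from additive (RGB) to subtractive (CMY) color,
   with all components normalized to the range [0, 1]. */
void rgb_to_cmy(double r, double g, double b, double *c, double *m, double *y)
{
    *c = 1.0 - r;   /* a cyan filter removes red */
    *m = 1.0 - g;   /* a magenta filter removes green */
    *y = 1.0 - b;   /* a yellow filter removes blue */
}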

59
Global vs Local Lighting

Cannot compute the color or shade of each object independently
- Some objects are blocked from light
- Light can reflect from object to object
- Some objects might be translucent

60
Why not ray tracing?

Ray tracing seems more physically based, so why don't we use it to design a graphics system?
It is possible, and is actually simple for simple objects such as polygons and quadrics with simple point sources.
In principle, it can produce global lighting effects such as shadows and multiple reflections, but ray tracing is slow and not well-suited for interactive applications.
61
The Modeling-Rendering Paradigm

The interface between the modeler and renderer can be as simple as a file produced by the modeler that describes the objects and that contains additional information important only to the renderer, such as light sources, viewer location, and material properties.

62
GRAPHICS ARCHITECTURES
Early graphics systems used general-purpose computers
with the standard von Neumann architecture. Such
computers are characterized by a single processing unit
that processes a single instruction at a time.
The display in these systems was based on a calligraphic
CRT display that included the necessary circuitry to
generate a line segment connecting two points.

63
Display Processor
Rather than have the host computer try to refresh the display, use a special-purpose computer called a display processor (DPU).

Graphics are stored in a display list (display file) on the display processor.
The host compiles the display list and sends it to the DPU.
The program in the display list is executed repetitively, at a
rate sufficient to avoid flicker, independently of the host,
thus freeing the host for other tasks.
64
Pipeline Architectures
If we use this configuration to compute a + (b * c), the calculation takes one multiplication and one addition, the same amount of work required if we use a single processor to carry out both operations.
However, the rate at which data flows through the system, the throughput of the system, can be increased. The time for a single datum to pass through the system is called the latency.

65
The Graphics Pipeline
Process objects one at a time in the order they are
generated by the application
There is no point in building a pipeline unless we will
do the same operation on many data sets.
In computer graphics, large sets of vertices and
pixels must be processed in the same manner.
The four major steps in the imaging process:
1. Vertex processing
2. Clipping and primitive assembly
3. Rasterization
4. Fragment processing
66
1. Vertex Processing
Each vertex is processed independently.
The two major functions of this block are to carry out
coordinate transformations and to compute a color for
each vertex.
Much of the work in the pipeline is in converting object
representations from one coordinate system to another
- Object coordinates
- Camera (eye) coordinates
- Screen coordinates

67
Successive changes in coordinate systems are handled by multiplying, or concatenating, the individual matrices into a single matrix.
The vertex processor also computes vertex colors.
After multiple stages of transformation, the geometry is
transformed by a projection transformation
The assignment of vertex colors can be as simple as the
program specifying a color or as complex as the
computation of a color from a physically realistic lighting
model that incorporates the surface properties of the object
and the characteristic light sources in the scene
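As a generic sketch of what concatenation means in practice (not code from the slides), the individual transformation matrices are multiplied into one 4 x 4 matrix, and each vertex is then multiplied by that single matrix:

/* result = a * b for 4x4 matrices stored in row-major order. */
void mat4_multiply(const float a[4][4], const float b[4][4], float result[4][4])
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            result[i][j] = 0.0f;
            for (int k = 0; k < 4; k++)
                result[i][j] += a[i][k] * b[k][j];
        }
}

/* Transform a homogeneous vertex v by the combined matrix m: out = m * v. */
void mat4_transform(const float m[4][4], const float v[4], float out[4])
{
    for (int i = 0; i < 4; i++) {
        out[i] = 0.0f;
        for (int k = 0; k < 4; k++)
            out[i] += m[i][k] * v[k];
    }
}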

68
Projection

Projection is the process that combines the 3D viewer with the 3D objects to produce the 2D image.
- Perspective projection: all projectors meet at the center of projection
- Parallel projection: projectors are parallel; the center of projection is replaced by a direction of projection
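For illustration only (legacy OpenGL calls with made-up clipping values; the slides do not name these functions), the two kinds of projection correspond to different projection matrices:

#include <GL/gl.h>

void set_projection(int use_perspective)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (use_perspective)
        /* perspective: projectors meet at the center of projection */
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);
    else
        /* parallel (orthographic): projectors follow a direction of projection */
        glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);
}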

69
2. Primitive Assembly
No imaging system can see the whole world at once
The human retina has a limited size corresponding to an
approximately 90-degree field of view.
Cameras have film of limited size, and can adjust their
fields of view by selecting different lenses.
Vertices must be collected into geometric objects before clipping and rasterization can take place:
- Line segments
- Polygons
- Curves and surfaces

70
2. Clipping
Just as a real camera cannot see the whole world, the virtual camera can see only part of the world or object space, its view volume.
- Objects that are not within this volume are said to be clipped out of the scene

71
Clipping must be done on a primitive-by-primitive basis rather than on a vertex-by-vertex basis.
Thus, within this stage of the pipeline, one must assemble
sets of vertices into primitives, such as line segments and
polygons, before clipping can take place.
Consequently, the output of this stage is a set of primitives
whose projections can appear in the image.
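As an illustrative sketch (a trivial test, not the clipping algorithm any particular system uses), the simplest clipping decision asks whether a point lies inside an axis-aligned view volume; a primitive all of whose vertices fall outside the same face can be rejected outright:

typedef struct { float x, y, z; } Point3;
typedef struct { Point3 min, max; } Volume;

/* Return 1 if p lies inside the axis-aligned view volume v, 0 otherwise. */
int point_in_volume(Point3 p, Volume v)
{
    return p.x >= v.min.x && p.x <= v.max.x &&
           p.y >= v.min.y && p.y <= v.max.y &&
           p.z >= v.min.z && p.z <= v.max.z;
}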

72
3. Rasterization

If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors.
The rasterizer produces a set of fragments for each object.
Fragments are potential pixels:
- They have a location in the frame buffer
- They have color and depth attributes
Vertex attributes are interpolated over objects by the rasterizer.

73
The primitives that emerge from the clipper are still
represented in terms of their vertices and must be
converted to pixels in the frame buffer.
For example, if three vertices specify a triangle with a solid
color, the rasterizer must determine which pixels in the
frame buffer are inside the polygon.
The output of the rasterizer is a set of fragments for each
primitive.
A fragment can be thought of as a potential pixel that
carries with it information, including its color and location,
that is used to update the corresponding pixel in the frame
buffer.
Fragments can also carry along depth information.

74
4. Fragment Processing

Fragments are processed to determine the color of the corresponding pixel in the frame buffer.
The final block in our pipeline takes in the fragments generated by the rasterizer and updates the pixels in the frame buffer. If the application generated 3D data, some fragments may not be visible because the surfaces that they define are behind other surfaces.
Colors can be determined by texture mapping or by interpolation of vertex colors.
Fragments may be blocked by other fragments closer to the camera (hidden-surface removal).
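A minimal sketch of the depth test behind hidden-surface removal (generic code with assumed buffer names and sizes, not from the slides): a fragment updates a pixel only if it is closer than what has already been drawn there.

#define WIDTH  1280
#define HEIGHT 1024

static float depth_buffer[HEIGHT][WIDTH];   /* depth of closest fragment so far */
static int   color_buffer[HEIGHT][WIDTH];   /* current pixel colors */

void process_fragment(int x, int y, float depth, int color)
{
    if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
        return;
    if (depth < depth_buffer[y][x]) {   /* closer to the camera than the stored fragment */
        depth_buffer[y][x] = depth;
        color_buffer[y][x] = color;
    }
}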

75
Programmable Pipelines
Approaches such as ray tracing, radiosity, and photon mapping cannot achieve real-time behavior, the ability to render complex dynamic scenes so that the viewer sees the display without defects.
Vertex programs can alter the location or color of each vertex as it flows through the pipeline.
Programmability is now available at every level, including hand-held devices such as cell phones. WebGL is being built into Web browsers.

76
Performance Characteristics
There are two fundamentally different types of processing: at the front end there is geometric processing, and at the back end there is rasterization.
Pipeline architectures dominate the graphics field, especially where real-time performance is important.
Commodity graphics cards incorporate the pipeline within their GPUs. Cards that cost less than $100 can render millions of shaded, texture-mapped polygons per second.
Graphics cards use GPUs that contain the entire pipeline within a single chip. The latest cards implement the entire pipeline using floating-point arithmetic and have floating-point frame buffers.
77
