IT6005 Digital Image Processing - QB



DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

QUESTION BANK

IV YEAR 7TH SEMESTER

BATCH 2013-2017
IT6005 DIGITAL IMAGE PROCESSING SYLLABUS

UNIT               I                      DIGITAL IMAGE FUNDAMENTALS 8

Introduction – Origin – Steps in Digital Image Processing – Components – Elements of Visual Perception – Image Sensing and Acquisition – Image Sampling and Quantization – Relationships between pixels – Color models.

UNIT               II                     IMAGE ENHANCEMENT 10

Spatial Domain: Gray level transformations – Histogram processing – Basics of Spatial Filtering – Smoothing and Sharpening Spatial Filtering – Frequency Domain: Introduction to Fourier Transform – Smoothing and Sharpening frequency domain filters – Ideal, Butterworth and Gaussian filters.

UNIT               III                     IMAGE RESTORATION AND SEGMENTATION 9

Noise models – Mean Filters – Order Statistics – Adaptive filters – Band reject Filters – Band
pass Filters – Notch Filters – Optimum Notch Filtering – Inverse Filtering – Wiener filtering
Segmentation: Detection of Discontinuities–Edge Linking and Boundary detection – Region
based segmentation- Morphological processing- erosion and dilation.

UNIT               IV                    WAVELETS AND IMAGE COMPRESSION 9

Wavelets – Subband coding – Multiresolution expansions – Compression: Fundamentals – Image Compression models – Error Free Compression – Variable Length Coding – Bit-Plane Coding – Lossless Predictive Coding – Lossy Compression – Lossy Predictive Coding – Compression Standards.

 UNIT              V                     IMAGE REPRESENTATION AND RECOGNITION 9

Boundary representation – Chain Code – Polygonal approximation, signature, boundary segments – Boundary description – Shape number – Fourier Descriptor, moments – Regional Descriptors – Topological feature, Texture – Patterns and Pattern classes – Recognition based on matching.

TEXT BOOK:

1. Rafael C. Gonzalez, Richard E. Woods, “Digital Image Processing”, Third Edition, Pearson Education, 2010.

REFERENCES:

1. Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins, “Digital Image Processing Using
MATLAB”, Third Edition, Tata McGraw Hill Pvt. Ltd., 2011.
2. Anil K. Jain, “Fundamentals of Digital Image Processing”, PHI Learning Pvt. Ltd., 2011.
3. William K. Pratt, “Digital Image Processing”, John Wiley & Sons, 2002.
4. Malay K. Pakhira, “Digital Image Processing and Pattern Recognition”, First Edition, PHI
Learning Pvt. Ltd., 2011.
5. http://eeweb.poly.edu/~onur/lectures/lectures.html.
6. http://www.caen.uiowa.edu/~dip/LECTURE/lecture.html

UNIT-I DIGITAL IMAGE FUNDAMENTALS

PART -A

Define Image?
An image may be defined as a two-dimensional light-intensity function f(x, y), where x and y denote spatial coordinates and the amplitude or value of f at any point (x, y) is called the intensity, gray level or brightness of the image at that point.
1. What is Dynamic Range?
The range of values spanned by the gray scale is called dynamic range of an image.
Image will have high contrast, if the dynamic range is high and image will have dull washed
out gray look if the dynamic range is low.
2. Define Brightness?
Brightness is the perceived luminance of an object and depends on the luminance of its surround. Two objects with different surroundings may have identical luminance but different brightness.
3. Define Tapered Quantization?
If gray levels in a certain range occur frequently while others occur rarely, the
quantization levels are finely spaced in this range and coarsely spaced outside of it. This
method is sometimes called Tapered Quantization.
4. What do you mean by Gray level?
Gray level refers to a scalar measure of intensity that ranges from black to grays and
finally to white.
5. What do you mean by Color model?
A color model is a specification of a 3-D coordinate system and a subspace within that system where each color is represented by a single point.
6. List the hardware oriented color models?
1. RGB model
2. CMY model
3. YIQ model
4. HSI model
7. What are hue and saturation?

Hue is a color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
8. List the applications of color models?
1. RGB model--- used for color monitor & color video camera
2. CMY model---used for color printing
3. HSI model----used for color image processing
4. YIQ model---used for color picture transmission

9. What is Chromatic Adaptation?

The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The color of the flag will appear shifted in hue toward cyan, the complement of red.
10. Define Resolution?
Resolution is a measure of the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
11. What is meant by pixel?
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as pixels, image elements, picture elements or pels.
12. Define Digital image?
When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
13. What are the steps involved in DIP?
1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation
14. What is recognition and Interpretation?
Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation means assigning meaning to a recognized object.
15. Specify the elements of DIP system?
1. Image Acquisition
2. Storage
3. Processing
4. Display
16. Explain the categories of digital storage?
1. Short term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access.
18. What are the types of light receptors?
The two types of light receptors are
1. Cones and
2. Rods

19. Differentiate photopic and scotopic vision?

Photopic vision
This is bright-light (cone) vision. The eye can resolve fine detail with the cones because each cone is connected to its own nerve end.
Scotopic vision
This is dim-light (rod) vision. Several rods are connected to one nerve end, so the rods give only an overall, less detailed picture of the scene.

20. How are cones and rods distributed in the retina?

In each eye, cones number about 6-7 million and rods about 75-150 million.
21. Define subjective brightness and brightness adaptation?
Subjective brightness means intensity as perceived by the human visual system.
Brightness adaptation means the human visual system cannot operate over its whole range (from the scotopic threshold to the glare limit) simultaneously. It accomplishes this large variation by changing its overall sensitivity.

22. Define weber ratio

The ratio of the increment of illumination to the background illumination is called the Weber ratio, i.e. ∆Ic/I.
If the ratio ∆Ic/I is small, only a small percentage change in intensity is discernible, i.e. good brightness discrimination.
If the ratio ∆Ic/I is large, a large percentage change in intensity is needed before it is discernible, i.e. poor brightness discrimination.
23. What is meant by mach band effect?
Although the intensity of each stripe is constant, the visual system perceives undershoot and overshoot in brightness near the boundaries between stripes of different intensity; these perceived bands of scalloped brightness are called Mach bands.
24. What is simultaneous contrast?
A region's perceived brightness does not depend only on its intensity but also on its background. All the center squares have exactly the same intensity; however, they appear to the eye to become darker as the background becomes lighter.
25. What is meant by illumination and reflectance?
Illumination is the amount of source light incident on the scene. It is represented as i(x, y).
Reflectance is the amount of light reflected by the object in the scene. It is represented by r(x,
y).
26. Define sampling and quantization?
Sampling means digitizing the co-ordinate value (x, y).Quantization means digitizing
the amplitude value.
27. Find the number of bits required to store a 256 X 256 image with 32 gray levels?
32 gray levels = 2^5, so k = 5 bits per pixel; 256 X 256 X 5 = 327,680 bits.
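The arithmetic can be checked with a short Python sketch (the function name is ours, for illustration; L is assumed to be a power of two):

```python
# Storage for a width x height image with L gray levels, assuming L is a
# power of two so that k = log2(L) bits are needed per pixel.
import math

def image_storage_bits(width, height, gray_levels):
    k = int(math.log2(gray_levels))  # bits per pixel
    return width * height * k

print(image_storage_bits(256, 256, 32))  # 327680
```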
28. Write the expression to find the number of bits to store a digital image?
The number of bits required to store a digital image is b=M X N X k, When M=N, this
equation becomes b=N^2k
29. What do you mean by Zooming of digital images?
Zooming may be viewed as oversampling. It involves the creation of new pixel locations and the assignment of gray levels to those new locations.
30. What do you mean by shrinking of digital images?
Shrinking may be viewed as undersampling. To shrink an image by one half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it.
31. Write short notes on neighbors of a pixel.
A pixel p at coordinates (x, y) has 4 neighbors, i.e. 2 horizontal and 2 vertical neighbors, whose coordinates are (x+1, y), (x-1, y), (x, y-1) and (x, y+1). These are called the direct neighbors of p and are denoted N4(p). The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y-1) and (x-1, y+1) and are denoted ND(p). The eight neighbors of p, denoted N8(p), are the combination of the 4 direct neighbors and the 4 diagonal neighbors.
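The three neighbor sets can be enumerated with a small Python sketch (border handling is ignored for simplicity; the function names are ours):

```python
# 4-neighbors, diagonal neighbors and 8-neighbors of a pixel p = (x, y).
def n4(x, y):  # direct (horizontal and vertical) neighbors
    return [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)]

def nd(x, y):  # diagonal neighbors
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y - 1), (x - 1, y + 1)]

def n8(x, y):  # all eight neighbors
    return n4(x, y) + nd(x, y)

print(n8(1, 1))
```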
32. Explain the types of connectivity.
• 4 connectivity
• 8 connectivity
• M connectivity (mixed connectivity)

33. What is meant by path?

A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
34. Give the formula for calculating D4 and D8 distance.
D4 distance (city-block distance): D4(p, q) = |x - s| + |y - t|.
D8 distance (chessboard distance): D8(p, q) = max(|x - s|, |y - t|).
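Both distance measures are one-liners in Python; for example, between p = (0, 0) and q = (3, 4) the city-block distance is 7 and the chessboard distance is 4:

```python
# City-block (D4) and chessboard (D8) distances between pixels p and q.
def d4(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d4((0, 0), (3, 4)), d8((0, 0), (3, 4)))  # 7 4
```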
35. What is geometric transformation?
Transformation is used to alter the co-ordinate description of image.
The basic geometric transformations are
• Image translation
• Scaling
• Image rotation

36. What is image translation and scaling?

 Image translation means repositioning the image from one coordinate location to another along a straight-line path.
 Scaling is used to alter the size of the object or image, i.e. the coordinate system is scaled by a factor.

PART-B

1. Explain the various functional blocks of digital image processing.
Refer page no 3 in Digital Image Processing by Rafael C. Gonzalez
2. Briefly explain the elements of the human visual system.
Refer page no 36 in Digital Image Processing by Rafael C. Gonzalez
3. Describe image formation in the eye with brightness adaptation and discrimination.
Refer page no 38 in Digital Image Processing by Rafael C. Gonzalez
4. Explain sampling and quantization.
Refer page no 52 in Digital Image Processing by Rafael C. Gonzalez
5. Explain the CMY model and the RGB model.
Refer page no 401 in Digital Image Processing by Rafael C. Gonzalez
6. Explain in detail the basic relationships between pixels.
Refer page no 68 in Digital Image Processing by Rafael C. Gonzalez

UNIT -II IMAGE ENHANCEMENT

PART A

1. Specify the objective of image enhancement technique.

The objective of enhancement techniques is to process an image so that the result is more suitable than the original image for a particular application.
2. Explain the two categories of image enhancement.
i) Spatial-domain methods are based on direct manipulation of the pixels in the image plane itself.
ii) Frequency-domain methods are based on modifying the Fourier transform of the image.
3. What is contrast stretching?
Contrast stretching produces an image of higher contrast than the original by darkening the levels below m and brightening the levels above m in the image.
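One simple realization (among many) is a linear mapping through the point (m, m) with slope greater than 1, clipped to the gray-level range; the midpoint m and gain below are illustrative choices, not values from the text:

```python
# Minimal contrast-stretching sketch: levels below m are darkened,
# levels above m are brightened, output clipped to [0, l_max].
def stretch(r, m=128, gain=2.0, l_max=255):
    s = m + gain * (r - m)
    return int(min(max(s, 0), l_max))

print(stretch(100), stretch(200))  # darker than 100, brighter than 200
```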
4. What is grey level slicing?
Highlighting a specific range of grey levels in an image often is desired. Applications
include enhancing features such as masses of water in satellite imagery and enhancing flaws in
x-ray images.
5. Define image subtraction.
The difference between 2 images f(x,y) and h(x,y) expressed as, g(x,y)=f(x,y)-h(x,y) is
obtained by computing the difference between all pairs of corresponding pixels from f and h.
6. What is the purpose of image averaging?
An important application of image averaging is in the field of astronomy, where imaging with very low light levels is routine, causing sensor noise frequently to render single images virtually useless for analysis.
7. What is meant by masking?
A mask is a small 2-D array in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach are referred to as mask processing.
8. Give the formula for negative and log transformation.
Negative: s = L - 1 - r. Log: s = c log(1 + r), where c is a constant and r ≥ 0.
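Both point transformations are easy to sketch in Python; here L = 256, and the log constant c is chosen so that the output also spans [0, 255] (these are illustrative choices):

```python
import math

L = 256  # number of gray levels (assumed)

def negative(r):
    return L - 1 - r

def log_transform(r, c=255 / math.log(256)):
    return c * math.log(1 + r)

print(negative(0), round(log_transform(255)))  # 255 255
```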
9. What is meant by bit plane slicing?
Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit plane 0 for the LSB to bit plane 7 for the MSB.
10. Define histogram.
The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
11. What is meant by histogram equalization?
sk = T(rk) = Σ(j=0 to k) pr(rj) = Σ(j=0 to k) nj/n, where k = 0, 1, 2, ..., L-1.
This transformation is called histogram equalization.
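The discrete transformation can be sketched in Python: each gray level is remapped through the cumulative distribution and scaled by L - 1 (the rounding step and the tiny example are our additions):

```python
# Histogram equalization on a list of gray levels in [0, levels - 1].
from collections import Counter

def equalize(pixels, levels=8):
    n = len(pixels)
    hist = Counter(pixels)                      # h(r_k) = n_k
    cdf = 0.0
    mapping = {}
    for r in range(levels):
        cdf += hist.get(r, 0) / n               # running sum of n_j / n
        mapping[r] = round((levels - 1) * cdf)  # s_k = (L - 1) T(r_k)
    return [mapping[p] for p in pixels]

print(equalize([0, 0, 1, 1, 2, 3]))  # [2, 2, 5, 5, 6, 7]
```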
12. What is the need for transform?

Most signals and images are originally in the spatial (or time) domain, i.e. they are measured as functions of position or time. This representation is not always the best one. For most image processing applications, a mathematical transformation is applied to the signal or image to obtain further information from it.

13. Define the term Luminance?

Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.

14. What is Image Transform?

An image can be expanded in terms of a discrete set of basis arrays called basis images. These basis images can be generated by unitary matrices. Alternatively, a given NxN image can be viewed as an N^2x1 vector. An image transform provides a set of coordinates or basis vectors for the vector space.

15. What are the applications of transform?

1) To reduce band width

2) To reduce redundancy

3) To extract feature.

16. Give the conditions for perfect transform?

Transpose of the matrix = inverse of the matrix, i.e. orthogonality.

17. What are the properties of unitary transform?

1) The determinant and eigenvalues of a unitary matrix have unity magnitude.

2) The entropy of a random vector is preserved under a unitary transformation.

3) Since entropy is a measure of average information, this means information is preserved under a unitary transformation.

18. Write the steps involved in frequency domain filtering.

1. Multiply the input image by (-1)^(x+y) to center the transform.

2. Compute F(u,v), the DFT of the image from (1).

3. Multiply F(u,v) by a filter function H(u,v).

4. Compute the inverse DFT of the result in (3).

5. Obtain the real part of the result in (4).

6. Multiply the result in (5) by (-1)^(x+y).
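The six steps can be sketched with NumPy; the Gaussian lowpass H(u, v) and the cutoff d0 are illustrative choices (any filter function could be substituted):

```python
import numpy as np

def freq_filter(img, d0=10.0):
    m, n = img.shape
    x, y = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    centered = img * (-1.0) ** (x + y)        # 1. center the transform
    f = np.fft.fft2(centered)                 # 2. forward DFT
    u, v = np.meshgrid(np.arange(m) - m / 2,
                       np.arange(n) - n / 2, indexing="ij")
    h = np.exp(-(u**2 + v**2) / (2 * d0**2))  # 3. Gaussian lowpass H(u, v)
    g = np.fft.ifft2(f * h)                   # 4. inverse DFT
    return np.real(g) * (-1.0) ** (x + y)     # 5-6. real part, de-center

print(freq_filter(np.ones((16, 16))).shape)  # (16, 16)
```

A constant image passes through unchanged, since H at the centered DC term equals 1.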

19. What do you mean by Point processing?

Image enhancement at any point in an image that depends only on the gray level at that point is often referred to as point processing.
20. Define Derivative filter?
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the vector
∇f = [∂f/∂x, ∂f/∂y]^T
and its magnitude is
|∇f| = mag(∇f) = [(∂f/∂x)^2 + (∂f/∂y)^2]^(1/2)
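A minimal numerical version approximates the partial derivatives with forward differences (a generic finite-difference sketch, not a specific named operator; the sample image is ours):

```python
import math

def gradient_magnitude(f, x, y):
    gx = f[x + 1][y] - f[x][y]  # forward difference for df/dx
    gy = f[x][y + 1] - f[x][y]  # forward difference for df/dy
    return math.sqrt(gx ** 2 + gy ** 2)

img = [[0, 4, 8], [3, 7, 11], [6, 10, 14]]  # f(x, y) = 3x + 4y
print(gradient_magnitude(img, 0, 0))  # 5.0
```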
21. Specify the properties of 2D Fourier transform.
The properties are
 Separability
 Translation
 Periodicity and conjugate symmetry
 Rotation
 Distributivity and scaling
 Average value
 Laplacian
 Convolution and correlation
 sampling
22. What are the properties of Haar transform?
1. The Haar transform is real and orthogonal.
2. The Haar transform is a very fast transform.
3. The Haar transform has very poor energy compaction for images.
4. The basis vectors of the Haar matrix are sequency ordered.
23. What are the properties of Slant transform?
1. The Slant transform is real and orthogonal.
2. The Slant transform is a fast transform.
3. The Slant transform has very good energy compaction for images.
4. The basis vectors of the Slant matrix are not sequency ordered.
24. Write the properties of Singular Value Decomposition (SVD)?
 The SVD transform varies drastically from image to image.
 The SVD transform gives the best energy-packing efficiency for any given image.
 The SVD transform is useful in the design of filters, in finding least-squares and minimum-norm solutions of linear equations, and in finding the rank of large matrices.
25. Write the applications of sharpening filters.
1. Electronic printing, medical imaging and industrial applications.
2. Autonomous target detection in smart weapons.
26. Define Gray-level interpolation
Gray-level interpolation deals with the assignment of gray levels to pixels in the
spatially transformed image.
PART B

1. Explain the types of gray level transformation used for image


enhancement.

Refer page no 105 in Digital Image processing by Rafael C. Gonzalez


2. What is histogram? Explain histogram equalization.

Refer page no 120 in Digital Image processing by Rafael C. Gonzalez


3. Discuss the image smoothing filter with its model in the spatial domain.

Refer page no 152 in Digital Image processing by Rafael C. Gonzalez


4. What are image sharpening filters? Explain the various types of it.

Refer page no 157 in Digital Image processing by Rafael C. Gonzalez


5. Explain spatial filtering in image enhancement.

Refer page no 144 in Digital Image processing by Rafael C. Gonzalez


6. Explain image enhancement in the frequency domain.

Refer page no 199 in Digital Image processing by Rafael C. Gonzalez


7. Explain Homomorphic filtering in detail.

Refer page no 289 in Digital Image processing by Rafael C. Gonzalez



UNIT III -IMAGE RESTORATION AND SEGMENTATION

PART A

1. Give the difference between Enhancement and Restoration

Enhancement techniques are based primarily on the pleasing aspects they might present to the viewer, for example contrast stretching, whereas removal of image blur by applying a deblurring function is considered a restoration technique.
2. What is meant by Image Restoration?
Restoration attempts to reconstruct or recover an image that has been degraded, by using a priori knowledge of the degradation phenomenon.
3. What are the two properties in Linear Operator?
 Additivity
 Homogeneity
4. How is a degradation process modeled?
A system operator H, together with an additive noise term η(x, y), operates on an input image f(x, y) to produce a degraded image g(x, y).
5. Explain the homogeneity property in Linear Operator?
H[k1 f1(x, y)] = k1 H[f1(x, y)]
The homogeneity property states that the response to a constant multiple of any input is equal to the response to that input multiplied by the same constant.
6. Define circulant matrix?
A square matrix, in which each row is a circular shift of the preceding row and the first
row is a circular shift of the last row, is called circulant matrix.

7. What is meant by Noise probability density function?


The spatial noise descriptor is the statistical behavior of gray level values in the noise
component of the model.
8. Why is the restoration called unconstrained restoration?
In the absence of any knowledge about the noise n, a meaningful criterion function is to seek an f^ such that H f^ approximates g in a least-squares sense, assuming the noise term is as small as possible, where H = system operator, f^ = estimated input image and g = degraded image.
9. Which is the most frequently used method to overcome the difficulty of formulating the spatial relocation of pixels?
Tie points are the most frequently used method; these are subsets of pixels whose locations in the input (distorted) and output (corrected) images are known precisely.
10. What are the three methods of estimating the degradation function?
1. Observation
2. Experimentation
3. Mathematical modeling.

11. What are the types of noise models?

 Gaussian noise
 Rayleigh noise
 Erlang (Gamma) noise
 Exponential noise
 Uniform noise
 Impulse noise
12. Give the relation for Rayleigh noise?
Rayleigh noise: the PDF is
p(z) = (2/b)(z - a) e^(-(z-a)^2/b) for z >= a, and 0 for z < a,
with mean μ = a + sqrt(πb/4) and variance σ^2 = b(4 - π)/4.
13. Give the relation for Gamma noise?
Gamma (Erlang) noise: the PDF is
p(z) = a^b z^(b-1) e^(-az)/(b-1)! for z >= 0, and 0 for z < 0,
with mean μ = b/a and variance σ^2 = b/a^2.
14. Give the relation for Exponential noise?
Exponential noise: the PDF is
p(z) = a e^(-az) for z >= 0, and 0 for z < 0,
with mean μ = 1/a and variance σ^2 = 1/a^2.
15. Give the relation for Uniform noise?
Uniform noise: the PDF is
p(z) = 1/(b - a) for a <= z <= b, and 0 otherwise,
with mean μ = (a + b)/2 and variance σ^2 = (b - a)^2/12.
16. Give the relation for Impulse noise?
Impulse noise: the PDF is
p(z) = Pa for z = a, Pb for z = b, and 0 otherwise.
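The uniform-noise mean and variance formulas above can be checked empirically with a quick Python sketch (the sample size and the interval [a, b] are arbitrary choices):

```python
import random

random.seed(0)  # deterministic sampling for reproducibility
a, b = 10.0, 50.0
samples = [random.uniform(a, b) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Expect mean ≈ (a + b)/2 = 30 and variance ≈ (b - a)^2/12 ≈ 133.3
print(round(mean, 1), round(var, 1))
```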
17. What is meant by blind image restoration?
Information about the degradation must be extracted from the observed image, either explicitly or implicitly. This task is called blind image restoration.
18. What are the two approaches for blind image restoration?
(i) Direct measurement (ii)Indirect estimation
19. What is meant by direct measurement?
In direct measurement, the blur impulse response and noise levels are first estimated from an observed image; these parameters are then utilized in the restoration.
20. What are blur impulse response and noise levels?
Blur impulse response: this parameter is measured by isolating an image of a suspected object within a picture.
Noise levels: the noise of an observed image can be estimated by measuring the image covariance over a region of constant background luminance.
21. What is meant by indirect estimation?
Indirect estimation methods employ temporal or spatial averaging either to obtain a restoration or to obtain key elements of an image restoration algorithm.
22. What is segmentation?
Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved; that is, segmentation should stop when the objects of interest in an application have been isolated.
23. Write the applications of segmentation.
* Detection of isolated points.
* Detection of lines and edges in an image.


24. What are the three types of discontinuity in digital image?
Points, lines and edges.
25. How are the derivatives obtained in edge detection during formulation?
The first derivative at any point in an image is obtained by using the magnitude of the gradient at that point. Similarly, the second derivatives are obtained by using the Laplacian.
26. Write about linking edge points.
The approach for linking edge points is to analyze the characteristics of pixels in a small neighborhood (3x3 or 5x5) about every point (x, y) in an image that has undergone edge detection. All points that are similar are linked, forming a boundary of pixels that share some common properties.
27. What are the two properties used for establishing similarity of edge pixels?
(1) The strength of the response of the gradient operator used to produce the edge pixel.
(2) The direction of the gradient.
28. What is edge?
An edge is a set of connected pixels that lie on the boundary between two regions. Edges are more closely modeled as having a ramp-like profile. The slope of the ramp is inversely proportional to the degree of blurring in the edge.
29. What is meant by object point and background point?
To extract the objects from the background, we select a threshold T that separates these modes. Any point (x, y) for which f(x, y) > T is called an object point; otherwise the point is called a background point.
30. What is global, local and dynamic or adaptive threshold?
When the threshold T depends only on f(x, y), the threshold is called global. If T depends on both f(x, y) and a local property p(x, y), it is called local. If, in addition, T depends on the spatial coordinates x and y, the threshold is called dynamic or adaptive, where f(x, y) is the original image.
31. Define region growing?
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from these grow regions by appending to each seed those neighboring pixels that have properties similar to the seed.
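A minimal region-growing sketch in Python: grow from a single seed, appending 4-neighbors whose intensity is within a threshold of the seed value (the similarity criterion, the threshold and the tiny image are illustrative choices):

```python
def region_grow(img, seed, thresh=10):
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        x, y = stack.pop()
        if (x, y) in region or not (0 <= x < rows and 0 <= y < cols):
            continue
        if abs(img[x][y] - seed_val) <= thresh:  # predefined similarity criterion
            region.add((x, y))
            stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return region

img = [[100, 102, 200],
       [101, 103, 210],
       [ 99, 205, 220]]
print(sorted(region_grow(img, (0, 0))))  # the five pixels near intensity 100
```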
32. Specify the steps involved in splitting and merging?
Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE. Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE. Stop when no further merging or splitting is possible.
33. What is meant by markers?
An approach used to control over-segmentation is based on markers. A marker is a connected component belonging to an image. We have internal markers, associated with objects of interest, and external markers, associated with the background.

34. What are the two principal steps involved in marker selection?

The two steps are
1. Preprocessing
2. Definition of a set of criteria that markers must satisfy.

PART B

1. What is the use of wiener filter in image restoration? Explain.

Refer page no 352 in Digital Image processing by Rafael C. Gonzalez


2. What is meant by inverse filtering? Explain.

Refer page no 351 in Digital Image processing by Rafael C. Gonzalez


3. Explain singular value decomposition and specify its properties.

Refer page no 335 in Digital Image processing by Rafael C. Gonzalez


4. Explain image degradation model /restoration process in detail.

Refer page no 312 in Digital Image processing by Rafael C. Gonzalez


5. What are the two approaches for blind image restoration? Explain in
detail.

Refer page no 362 in Digital Image processing by Rafael C. Gonzalez


6. Discuss about Constrained least square restoration for a digital image in
detail.

Refer page no 357 in Digital Image processing by Rafael C. Gonzalez


7. What is image restoration? Explain the degradation model for continuous
function in detail.

Refer page no 313 in Digital Image processing by Rafael C. Gonzalez


8. Discuss about region-based image segmentation techniques. Compare thresholding with region-based techniques.
Refer page no 763 in Digital Image processing by Rafael C. Gonzalez
9. Explain the two techniques of region representation
Refer page no 738 in Digital Image processing by Rafael C. Gonzalez

10. Explain the segmentation techniques that are based on finding the regions directly.
Refer page no 738 in Digital Image processing by Rafael C. Gonzalez

UNIT IV -WAVELETS AND IMAGE COMPRESSION

PART A

1. What is image compression?

Image compression refers to the process of reducing the amount of data required to represent a given quantity of information in a digital image. The basis of the reduction process is the removal of redundant data.
2. What is Data Compression?
Data compression requires the identification and extraction of source redundancy. In
other words, data compression seeks to reduce the number of bits used to store or transmit
information.
3. What are the two main types of Data compression?
(i) Lossless compression can recover the exact original data after compression. It is
used mainly for compressing database records, spreadsheets or word processing files, where
exact replication of the original is essential.
(ii) Lossy compression will result in a certain loss of accuracy in exchange for a
substantial increase in compression. Lossy compression is more effective when used to
compress graphic images and digitised voice where losses outside visual or aural perception
can be tolerated.
4. What is the need for Compression?
In terms of storage, the capacity of a storage device can be effectively increased with
methods that compress a body of data on its way to a storage device and decompresses it when
it is retrieved. In terms of communications, the bandwidth of a digital communication link can
be effectively increased by compressing data at the sending end and decompressing data at the
receiving end.
At any given time, the ability of the Internet to transfer data is fixed. Thus, if data can
effectively be compressed wherever possible, significant improvements of data throughput can
be achieved. Many files can be combined into one compressed document making sending
easier.
5. What are the different compression methods?
 Run Length Encoding (RLE)
 Arithmetic coding
 Huffman coding
 Transform coding

6. Define coding redundancy?

If the gray levels of an image are coded in a way that uses more code words than necessary to represent each gray level, the resulting image is said to contain coding redundancy.
7. Define interpixel redundancy?
The value of any given pixel can be reasonably predicted from the values of its neighbors, so the information carried by an individual pixel is small and its visual contribution to the image is largely redundant. Interpixel redundancy is also called spatial or geometric redundancy.
8. What is run length coding?
Run-length encoding (RLE) is a technique used to reduce the size of a repeating string of characters. The repeating string is called a run; typically RLE encodes a run of symbols into two bytes, a count and a symbol. RLE can compress any type of data regardless of its information content, but the content of the data affects the compression ratio achieved.
9. Define compression ratio.
Compression ratio = original size / compressed size : 1
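The run-length coding described above and the compression ratio can be illustrated together; this is a toy character-based sketch (real RLE implementations pack counts and symbols into bytes):

```python
def rle_encode(data):
    runs = []
    for ch in data:
        if runs and runs[-1][1] == ch:
            runs[-1][0] += 1      # extend the current run
        else:
            runs.append([1, ch])  # start a new [count, symbol] run
    return runs

def rle_decode(runs):
    return "".join(ch * n for n, ch in runs)

text = "aaaabbbcc"
runs = rle_encode(text)
print(runs)                         # [[4, 'a'], [3, 'b'], [2, 'c']]
print(len(text) / (2 * len(runs)))  # ratio: 9 symbols vs 6 encoded -> 1.5
```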
10. Define psycho visual redundancy?
In normal visual processing, certain information has relatively less importance than other information; such information is said to be psychovisually redundant.
11. Define encoder
An encoder has two components: a) source encoder b) channel encoder. The source encoder is responsible for removing coding, interpixel and psychovisual redundancy.
12. Define source encoder
The source encoder performs three operations:
1) Mapper - transforms the input data into a (usually invertible) format designed to reduce interpixel redundancy.
2) Quantizer - reduces the psychovisual redundancy of the input image; this step is omitted if the system is error free.
3) Symbol encoder - reduces the coding redundancy; this is the final stage of the encoding process.
13. Define channel encoder
The channel encoder reduces the impact of channel noise by inserting redundant bits into the source-encoded data. Eg: Hamming code.
14. What are the types of decoder?
The source decoder has two components:
a) Symbol decoder - performs the inverse operation of the symbol encoder.
b) Inverse mapper - performs the inverse operation of the mapper.
The channel decoder is omitted if the system is error free.
15. What are the operations performed by error free compression?
1) Devising an alternative representation of the image in which its interpixel redundancies are reduced.
2) Coding the representation to eliminate coding redundancies.
What is Variable Length Coding?
Variable Length Coding is the simplest approach to error free compression. It reduces
only the coding redundancy. It assigns the shortest possible codeword to the most probable gray
levels.
Define Huffman coding
Huffman coding is a popular technique for removing coding redundancy. When the symbols
of an information source are coded individually, the Huffman code yields the smallest
possible number of code symbols per source symbol.
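A compact sketch of Huffman code construction from symbol probabilities (illustrative; a practical coder would also keep the tree for decoding):

```python
import heapq

def huffman_codes(symbol_probs):
    """Build a Huffman code from a {symbol: probability} mapping."""
    # Heap entries: (probability, tiebreak id, {symbol: code-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(symbol_probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        # Prefix one subtree's codes with 0, the other's with 1, and merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]
```

For probabilities {a: 0.5, b: 0.3, c: 0.2} the most probable symbol `a` receives a 1-bit code word while `b` and `c` receive 2-bit code words.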
Define Block code
Each source symbol is mapped into a fixed sequence of code symbols or code words, so the
code is called a block code.
Define instantaneous code
A code in which no code word is a prefix of any other code word is called an instantaneous
(or prefix) code; each code word can be decoded as soon as its last symbol is received.
Define uniquely decodable code
A code is uniquely decodable if any string of its code words can be decoded in only one
way, i.e. no code word is a combination of other code words.
What is bit-plane decomposition?
An effective technique for reducing an image's interpixel redundancies is to process the
image's bit planes individually. The technique is based on decomposing a multilevel image
into a series of binary images and compressing each binary image via one of several
well-known binary compression methods.
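A minimal sketch of the decomposition for pure-Python 2-D lists (in practice NumPy bitwise operations would be used; the function name is illustrative):

```python
def bit_planes(image, depth=8):
    """Split a 2-D list of pixel values into `depth` binary images.

    Plane 0 holds the least significant bit of every pixel,
    plane depth-1 the most significant bit."""
    planes = []
    for b in range(depth):
        planes.append([[(pix >> b) & 1 for pix in row] for row in image])
    return planes
```

Each of the eight planes of an 8-bit image is itself a binary image, ready for any binary compression scheme.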
22. Define B2 code
Each code word is made up of continuation bits c and information bits, which are binary
digits. It is called the B2 code (or B code) because two information bits are used per
continuation bit.
23. How can the effectiveness of quantization be improved?
 Introducing an enlarged quantization interval around zero, called a dead zone.
 Adapting the size of the quantization intervals from scale to scale.
In either case, the selected quantization intervals must be transmitted to the decoder with
the encoded image bit stream.
24. What are the coding systems in JPEG?
1. A lossy baseline coding system, which is based on the DCT and is adequate for
most compression applications.
2. An extended coding system for greater compression, higher precision or progressive
reconstruction applications.
3. A lossless independent coding system for reversible compression.
25. What is JPEG?
JPEG stands for Joint Photographic Experts Group. The JPEG standard, approved as an
international standard in 1992, works with both color and grayscale images and is used in
many applications, e.g., satellite and medical imaging.
26. What are the basic steps in JPEG?
The major steps in JPEG coding are:
1. DCT (Discrete Cosine Transform)
2. Quantization
3. Zigzag scan, DPCM on the DC component
4. RLE on the AC components
5. Entropy coding
27. What is MPEG?
MPEG stands for Moving Picture Experts Group. It is a family of international standards
for video compression (the first, MPEG-1, was completed in 1992) and is also used in
applications such as teleconferencing.
28. What is the zig-zag sequence?
The purpose of the zig-zag scan is to group the low-frequency coefficients at the top of the
vector; it maps the 8 x 8 block of coefficients to a 1 x 64 vector.
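A sketch that generates the zig-zag visiting order for an n x n block by walking its anti-diagonals (illustrative; the standard simply tabulates this order):

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order of an n x n zig-zag scan."""
    order = []
    for s in range(2 * n - 1):                 # each anti-diagonal r + c = s
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()                     # even diagonals run upward
        order.extend(diag)
    return order
```

For n = 8 this produces the 64-entry sequence (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), … used to serialise the DCT coefficients.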
Define I-frame
I-frame stands for intraframe or independent frame. An I-frame is compressed
independently of all other frames and resembles a JPEG-encoded image. It is the reference
point for the motion estimation needed to generate subsequent P- and B-frames.
Define P-frame
P-frame is the predictive frame. A P-frame is the compressed difference between the
current frame and a prediction of it based on the previous I- or P-frame.

Define B-frame
B-frame is the bidirectional frame. A B-frame is the compressed difference between the
current frame and a prediction of it based on the previous I- or P-frame and/or the next
P-frame. Accordingly, the decoder must have access to both past and future reference frames.
PART B

1. What is data redundancy? Explain the three basic data redundancies.

Refer page no 542 in Digital Image processing by Rafael C. Gonzalez


2.Explain about Multiresolution Expansions in detail.

Refer page no 477 in Digital Image processing by Rafael C. Gonzalez


3. What is image compression? Explain any four variable length coding
compression schemes.

Refer page no 526 in Digital Image processing by Rafael C. Gonzalez


4. Explain about Image compression model?

Refer page no 536 in Digital Image processing by Rafael C. Gonzalez


5. Explain the source encoder and decoder.

Refer page no 536 in Digital Image processing by Rafael C. Gonzalez


6. Explain the channel encoder and decoder.
Refer page no 536 in Digital Image processing by Rafael C. Gonzalez


7. Explain about Error free Compression?

Refer page no 550 in Digital Image processing by Rafael C. Gonzalez


8. Explain about Lossy compression?

Refer page no 560 in Digital Image processing by Rafael C. Gonzalez


9. Differentiate between lossless and lossy compression and explain transform
coding system with a neat diagram.

Refer page no 538 in Digital Image processing by Rafael C. Gonzalez

UNIT V- IMAGE REPRESENTATION AND RECOGNITION

PART A

1. Define chain codes.
Chain codes are used to represent a boundary by a connected sequence of straight-line
segments of specified length and direction. Typically this representation is based on 4- or
8-connectivity of the segments, and the direction of each segment is coded using a
numbering scheme.
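As a sketch, number the eight directions 0-7 counter-clockwise starting from east (one common convention); this hypothetical helper derives the 8-directional chain code of an ordered, closed boundary:

```python
# Direction numbering: 0 = east, then counter-clockwise in 45-degree steps.
# Keys are the (row, col) step from one boundary pixel to the next.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Chain-code a closed boundary given as an ordered list of (row, col) pixels."""
    code = []
    # Pair each pixel with its successor, wrapping around to close the curve.
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:] + boundary[:1]):
        code.append(DIRS[(r1 - r0, c1 - c0)])
    return code
```

A 2 x 2 square traversed clockwise from its top-left pixel, for instance, yields the code [0, 6, 4, 2] (east, south, west, north under this numbering).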
2. What are the demerits of chain code?
 The resulting chain code tends to be quite long.
 Any small disturbance along the boundary due to noise causes changes in the
code that may not be related to the shape of the boundary.
3. What is a thinning or skeletonizing algorithm?
An important approach to representing the structural shape of a plane region is to reduce it
to a graph. This reduction may be accomplished by obtaining the skeleton of the region via
a thinning (skeletonizing) algorithm. Thinning plays a central role in a broad range of image
processing problems, ranging from automated inspection of printed circuit boards to
counting of asbestos fibres in air filters.
4. Specify the various image representation approaches
 Chain codes
 Polygonal approximation
 Boundary descriptors.
5. What is polygonal approximation method?
Polygonal approximation is an image representation approach in which a digital boundary
is approximated with arbitrary accuracy by a polygon. For a closed curve the
approximation is exact when the number of segments in the polygon is equal to the number
of points in the boundary, so that each pair of adjacent points defines a segment of the polygon.
6. Specify the various polygonal approximation methods
 Minimum perimeter polygons
 Merging techniques
 Splitting techniques
7. Name few boundary descriptors
 Simple descriptors
 Shape numbers
 Fourier descriptors
8. Give the formula for diameter of boundary
The diameter of a boundary B is defined as Diam(B) = max over i, j of D(pi, pj), where D is
a distance measure and pi, pj are points on the boundary.
9. Define length of a boundary.
The length of a boundary is the number of pixels along the boundary. E.g., for a chain-coded
curve with unit spacing in both directions, the number of vertical and horizontal components
plus √2 times the number of diagonal components gives its exact length.
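The rule above can be sketched for an 8-directional chain code, where even codes are horizontal or vertical moves and odd codes are diagonal moves under the usual numbering (function name is illustrative):

```python
import math

def boundary_length(chain):
    """Length of an 8-directional chain-coded boundary with unit pixel spacing.

    Even codes (0, 2, 4, 6) are axis-aligned moves of length 1;
    odd codes (1, 3, 5, 7) are diagonal moves of length sqrt(2)."""
    straight = sum(1 for c in chain if c % 2 == 0)
    diagonal = len(chain) - straight
    return straight + math.sqrt(2) * diagonal
```

A unit square coded [0, 6, 4, 2] has length 4, while four diagonal links [1, 3, 5, 7] give 4√2.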
10. Define eccentricity and curvature of boundary
Eccentricity of a boundary is the ratio of its major axis to its minor axis. Curvature is the
rate of change of slope.
11. Define shape numbers
The shape number of a boundary is defined as the first difference of its chain code that has
the smallest magnitude. The order n of a shape number is the number of digits in its
representation.
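A sketch of the first difference and shape-number computation for a 4-directional chain code, treating the code circularly (helper names are illustrative):

```python
def first_difference(chain, conn=4):
    """First difference of a chain code: the number of counter-clockwise
    direction changes between successive links, taken circularly."""
    n = len(chain)
    return [(chain[(i + 1) % n] - chain[i]) % conn for i in range(n)]

def shape_number(chain, conn=4):
    """Circular rotation of the first difference with the smallest magnitude
    (smallest when the digit sequence is read as a number)."""
    diff = first_difference(chain, conn)
    rotations = [diff[i:] + diff[:i] for i in range(len(diff))]
    return min(rotations)
```

For a square traversed counter-clockwise with code [0, 1, 2, 3], every link turns by one step, so the first difference and shape number are both [1, 1, 1, 1]; rotating the chain code (i.e. changing the starting point) leaves the shape number unchanged.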
12. Give the Fourier descriptors for the following transformations
For a boundary of K points with descriptors a(u):
(1) Identity: a(u)
(2) Rotation by angle θ: a(u) e^(jθ)
(3) Translation by Δxy = Δx + jΔy: a(u) + Δxy δ(u)
(4) Scaling by α: α a(u)
(5) Starting point shifted by k0: a(u) e^(-j2πu k0 / K)
13. Specify the types of regional descriptors
 Simple descriptors
 Texture
14. Name few measures used as simple descriptors in region descriptors
 Area
 Perimeter
 Compactness
 Mean and median of gray levels
 Minimum and maximum of gray levels
 Number of pixels with values above and below the mean
15. Define compactness
Compactness of a region is defined as (perimeter)^2 / area. It is a dimensionless quantity
and is insensitive to uniform scale changes.
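A small sketch illustrating the definition and its scale insensitivity: a circle gives 4π regardless of radius (function name is illustrative):

```python
import math

def compactness(perimeter, area):
    """Dimensionless compactness measure: perimeter squared over area."""
    return perimeter ** 2 / area

# A circle of any radius r: (2*pi*r)^2 / (pi*r^2) = 4*pi, independent of r,
# so uniform scaling does not change the measure. A square gives 16.
c_circle = compactness(2 * math.pi * 3.0, math.pi * 3.0 ** 2)
c_square = compactness(4.0, 1.0)
```

The circle is the most compact shape, so 4π is the minimum value of this measure for continuous regions.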
16. Describe texture
Texture is one of the regional descriptors. It provides measures of properties such as
smoothness, coarseness and regularity. Three approaches are used to describe the texture of
a region:
(i) structural (ii) spectral (iii) statistical
17. Describe statistical approach
Statistical approaches describe the smooth, coarse and grainy characteristics of texture.
They are the simplest of the three approaches, describing texture using statistical moments
of the gray-level histogram of an image or region.
18. Define gray-level co-occurrence matrix.
Let P be a position operator and A a k x k matrix whose element aij counts how often a
point with gray level gi occurs in the position specified by P relative to a point with gray
level gj. The matrix C formed by dividing every element of A by n (the total number of
point pairs in the image satisfying P) is called the gray-level co-occurrence matrix. Because
C depends on P, the presence of given texture patterns may be detected by choosing an
appropriate position operator.
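A small pure-Python sketch of the construction (names are illustrative; a practical implementation would use, e.g., scikit-image's `graycomatrix`):

```python
def glcm(image, offset, levels):
    """Gray-level co-occurrence matrix for position operator `offset` = (dr, dc).

    A[i][j] counts pixel pairs whose values are (i, j) under the offset;
    C is A normalised by the total number of valid pairs n."""
    dr, dc = offset
    A = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    n = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:   # pair lies inside the image
                A[image[r][c]][image[r2][c2]] += 1
                n += 1
    C = [[a / n for a in row] for row in A]
    return C
```

For the two-level image [[0, 1], [0, 1]] with P = "one pixel to the right", every valid pair is (0, 1), so C has a single non-zero entry of 1.0 at position (0, 1).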
19. Explain structural and spectral approach
Structural approaches deal with the arrangement of image primitives, such as a description
of texture based on regularly spaced parallel lines.
Spectral approaches are based on properties of the Fourier spectrum and are used primarily
to detect global periodicity in an image by identifying high-energy, narrow peaks in the
spectrum. Three features of the Fourier spectrum are useful for texture description:
 Prominent peaks in the spectrum give the principal direction of the texture patterns.
 The location of the peaks in the frequency plane gives the fundamental spatial period
of the patterns.
 Eliminating any periodic components via filtering leaves the non-periodic image
elements.
20. Define pattern.
A pattern is a quantitative or structural description of an object or some other entity of
interest in an image.
21. Define pattern class.
A pattern class is a family of patterns that share some common properties. Pattern classes
are denoted w1, w2, ..., wM, where M is the number of classes.
22. List the three pattern arrangements.
 Vectors
 Strings
 Trees
23. Give the decision theoretic methods.
Matching: matching by minimum distance classifier and matching by correlation.
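The minimum distance classifier can be sketched as follows (names are illustrative): each pattern vector is assigned to the class whose mean vector is nearest.

```python
def min_distance_classify(x, class_means):
    """Assign pattern vector x to the class whose mean vector is nearest
    in Euclidean distance (squared distance suffices for the comparison)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_means, key=lambda w: dist2(x, class_means[w]))
```

With class means w1 = (0, 0) and w2 = (4, 4), the pattern (1, 1) is assigned to w1 and (3, 3.5) to w2.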
24. Define training pattern and training set.
The patterns used to estimate the parameters are called training patterns, and a set of such
patterns from each class is called a training set.
25. Define training
The process by which a training set is used to obtain decision functions is called
learning or training.
26. What are the layers in a back propagation network?
Input layer, hidden layer and output layer.

PART B

1. Define and explain the various representation approaches?

Refer page no 796 in Digital Image processing by Rafael C. Gonzalez


2. Explain polygonal approximations.

Refer page no 801 in Digital Image processing by Rafael C. Gonzalez


3. Explain Boundary descriptors in detail with a neat diagram.

Refer page no 815 in Digital Image processing by Rafael C. Gonzalez


4. Explain regional descriptors.

Refer page no 822 in Digital Image processing by Rafael C. Gonzalez


5. Explain about patterns and pattern classes.

Refer page no 861 in Digital Image processing by Rafael C. Gonzalez


6. Explain about Recognition based on matching.

Refer page no 903 in Digital Image processing by Rafael C. Gonzalez
