
DIGITAL IMAGE PROCESSING

Dr. Zohair Al-Ameen

PART 1 – DIGITAL IMAGE PROCESSING ESSENTIALS
What Is Digital Image Processing?
• A digital image may be defined as a two-dimensional function f(x, y), where x and y
are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates
(x, y) is called the intensity of the image.
• A digital image is a two-dimensional array that contains a finite number of digital
values called pixels.
• Pixel values typically represent gray levels or colors.
• Remember digitization implies that a digital image is an approximation of a real scene
because the real world is continuous.

Image Representation Inside The Computer
What Is Digital Image Processing?
• Digital Image Processing (DIP) is the use of computer
algorithms to process digital images.
• Digital image processing encompasses three types of
computerized processes in this field: low-, mid-, and high-
level processes.
Low-Level Process            Mid-Level Process               High-Level Process
Input: Image                 Input: Image                    Input: Attributes
Output: Image                Output: Attributes              Output: Understanding
Examples: Noise removal,     Examples: Object recognition,   Examples: Scene understanding,
image sharpening             segmentation                    autonomous navigation
History of Digital Image Processing

• Early 1920s: One of the first applications of digital
imaging was in the newspaper industry.
• The Bartlane cable picture
transmission service.
• Images were transferred by underwater
cable between London and New York.
• Pictures were coded for cable transfer
and reconstructed at the receiving end on a telegraph
printer.

(Figure: an early digital image)
History of DIP (cont…)

• Mid to late 1920s: Improvements to the Bartlane
system resulted in higher quality images.
• New reproduction processes based
on photographic techniques.
• Increased number of tones in
reproduced images.

(Figures: an early 15-tone digital image and an improved digital image)
History of DIP (cont…)

• 1960s: Improvements in computing technology and the
beginning of the space race led to a surge of work in digital
image processing.
• 1964: Computers used to
improve the quality of
images of the moon taken
by the Ranger 7 probe.
• Such techniques were used
in other space missions,
including the Apollo landings.

(Figure: a picture of the moon taken by the Ranger 7 probe minutes before
landing)
History of DIP (cont…)

• 1970s: Digital image processing begins to be used
in medical applications.
• 1979: Sir Godfrey N.
Hounsfield & Prof. Allan M.
Cormack share the Nobel
Prize in Medicine for the
invention of tomography,
the technology behind
computed tomography (CT) scans.

(Figure: a typical computed tomography (CT) image of a human brain)
State-of-the-art Examples of DIP

• 1980s – Today: The use of digital image processing
techniques has exploded, and they are now used for
all kinds of tasks in all kinds of areas:
• Image Enhancement / Restoration
• Space Imaging
• Medical Imaging
• Geographic Information Systems
• Law Enforcement
… and many more
Examples: Image Enhancement / Restoration
• One of the most common uses of DIP
techniques: Image Intensity Enhancement,
Image Denoising and Image Deblurring.
Examples: Space Imaging
• Launched in 1990, the Hubble
telescope can take images of
very distant objects.
• However, a flawed mirror
made many of Hubble’s
images useless.
• Image processing
techniques were
used to fix this
Examples: Medical Imaging
• Take slice from MRI scan of dog heart, and
find boundaries between types of tissue
• Image with gray levels representing tissue density
• Use a suitable filter to highlight edges

(Figures: original MRI image of a dog heart; edge-detection image)


Examples: GIS
• Geographic Information Systems (GIS)
• Digital image processing techniques are used
extensively to manipulate satellite imagery
• Land classification
• Weather forecasting
Examples: Law Enforcement

• Image processing techniques are used extensively by law enforcers:
• Number plate recognition for speed
cameras/automated toll systems
• Fingerprint recognition
• Enhancement of CCTV images
Components of an Image Processing System
Components of a
typical general-
purpose system
used for digital
image processing
Components of an Image Processing System
• Specialized image processing hardware usually consists of the digitizer plus
hardware that performs other primitive operations, such as an arithmetic
logic unit (ALU).
• The computer in an image processing system is a general-purpose computer
and can range from a PC to a supercomputer.
• Software for image processing consists of specialized modules that perform
specific tasks.
• Mass storage capability is a must in image processing applications.
• Image displays in use today are mainly color (preferably flat screen) TV
monitors.
• Hardcopy devices for recording images include laser printers and DVDs.
• Networking is almost a default function in any computer system in use today.
Image Data Types
• Binary Images
• are 2-D arrays that assign one numerical value from the set {0, 1} to
each pixel in the image.
• These are sometimes referred to as logical images: black
corresponds to zero (an ‘off’ or ‘background’ pixel) and white
corresponds to one (an ‘on’ or ‘foreground’ pixel).
• As no other values are permissible, these images can be
represented as a simple bit-stream, but in practice they are
represented as 8-bit integer images in the common image
formats.
• A fax image is an example of a binary image.
Image Data Types
• Binary Images
Image Data Types
• Intensity or grey-scale images
• are 2-D arrays that assign one numerical value to each pixel which is
representative of the intensity at that point (typically 0 to 255).
• the pixel value range is bounded by the bit resolution of the image and such
images are stored as N-bit integer images with a given format.
Image Data Types
• RGB or true-colour images
• are 3-D arrays that assign three numerical values to each
pixel, each value corresponding to the red, green and blue
(RGB) image channel component.
• Conceptually, we may consider them as three distinct, 2-D
planes so that they are of dimension C by R by 3, where R
is the number of image rows and C the number of image
columns.
Image Data Types
• RGB or true-colour images
Image File Formats
• In practice, image data must first be loaded into memory
from a file.
• Files provide the essential mechanism for storing,
archiving, and exchanging image data, and the choice of
the correct file format is an important decision.
• Today there exist a wide range of standardized file
formats.
• Using standardized file formats vastly increases the ease
with which images can be exchanged and the likelihood
that the images will be readable by other software in the
long term.
Image File Formats
• Raster versus Vector Data
• Raster images: contain pixel values arranged in a regular matrix using discrete
coordinates.
• Vector images represent geometric objects using continuous coordinates,
which are only rasterized once they need to be displayed on a physical device
such as a monitor or printer.
• A number of standardized file formats exist for vector images, such as:
• CGM (Computer Graphics Metafile).
• SVG (Scalable Vector Graphics).
• DXF (Drawing Exchange Format from AutoDesk).
• AI (Adobe Illustrator).
• PICT (Quick Draw Graphics Meta file from Apple) .
• WMF/EMF (Windows Metafile and Enhanced Metafile from Microsoft).
Image File Formats
• Raster File Formats
• Tagged Image File Format (TIFF)
• This is a widely used and flexible file format designed to meet the
professional needs of diverse fields.
• It was originally developed by Aldus and later extended by
Microsoft and currently Adobe. The format supports a range of
grayscale and true color images.
• The TIFF specification provides a range of different compression
methods (LZW, ZIP, CCITT, and JPEG).
• Used in archiving documents, scientific applications, digital
photography, and digital video production.
Image File Formats
• Raster File Formats
• Graphics Interchange Format (GIF)
• Was originally designed by CompuServe in 1986.
• One of the most widely used formats for representing images on the Web.
• This popularity is largely due to its early support for indexed color at multiple bit depths,
LZW compression, interlaced image loading, and ability to encode simple animations by
storing a number of images in a single file for later sequential display.
• GIF is essentially an indexed image file format designed for color and gray scale images with
a maximum depth of 8 bits and consequently it does not support true color images.
• It offers efficient support for encoding palettes containing from 2 to 256 colors, one of which
can be marked for transparency.
• The GIF file format is designed to efficiently encode “flat” or “iconic” images.
• It uses a lossless LZW compression to efficiently encode large areas of the same color.
• The PNG format is now preferred, as it outperforms GIF by almost every metric.
Image File Formats
• Raster File Formats
• Portable Network Graphics (PNG)
• Was originally developed as a replacement for the GIF file format because of
GIF's use of LZW compression.
• It was designed as a universal image format especially for use on the Internet.
• PNG supports three different types of images:
• true color (with up to 3 × 16 bits/pixel)
• grayscale (with up to 16 bits/pixel)
• indexed (with up to 256 colors)
• PNG includes an alpha channel for transparency with a maximum depth of 16 bits.
• It allows images of up to 2^30 × 2^30 pixels.
• The format supports lossless compression by means of a variation of PKZIP.
• The PNG format meets or exceeds the capabilities of the GIF format.
Image File Formats
• Raster File Formats
• Joint Photographic Experts Group (JPEG)
• Today it is the most widely used image file format.
• Can be used for grayscale and color images.
• Was established in 1990 as ISO Standard IS-10918.
• JPEG achieves, depending on the application, compression on the order of 1
bit per pixel.
• The JPEG compression method combines a number of different compression
methods and is quite complex in its entirety.
• Drawbacks of the JPEG compression algorithm include its limitation to 8-bit
images and its poor performance on non-photographic images such as line art.
Image File Formats

•Raster File Formats


• Windows Bitmap (BMP)
• A simple file format, widely used under Windows.
• Supports grayscale, indexed, and true color images.
• It also supports binary images, but not in an efficient
manner, since each pixel is stored using an entire byte.
• Optionally, the format supports simple lossless,
run-length-based compression.
Computer Representation of Images
• Digital images can be represented by a two-dimensional
array of pixels.
• Digital images have an M × N size.
• The gray-level set is {0, …, L-1}, where L is the number of
gray levels.
• If 8 bits are used, L is 256.
• Remember: even if two images have the same number of
pixels, their quality may differ due to differences in how
the images were captured.
Computer Representation of Images
Understanding Matrices (Matrix Size)
• A matrix is a 2D array of numbers, symbols, or expressions, arranged in rows
and columns.
• The items in a matrix are called its matrix elements or entries.
• An example is a matrix with 10 rows and 10 columns.
Understanding Matrices (Matrix Size)
• Size is the number of rows and columns a Matrix contains.
• A matrix with m rows and n columns is called an m × n matrix.
• m and n are called its dimensions.
• A matrix with a single row is called a row vector.
• A matrix with a single column is called a column vector.
• A square matrix is a matrix which has the same number of rows
and columns.
• A zero matrix is a matrix whose elements are all zeros.
Understanding Matrices (Matrix Size)
Understanding Matrices (Matrix Notation)
• Matrices are commonly written in box brackets or large
parentheses.
• Matrices are usually symbolized using upper-case letters
(A), while the corresponding lower-case letters, with two
subscript indices (e.g., a11 or a1,1), represent the entries.
• aij refers to the entry in the i-th row and j-th column.
Basic operations on matrices
• Matrices of the same size can be added or
subtracted element by element.
• Matrices can also be multiplied in an element-wise
way.
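These element-wise operations can be sketched in plain Python on small nested-list matrices (an illustrative sketch; real image code would typically use a numerical library):

```python
def elementwise(A, B, op):
    """Apply a binary operation element by element to two same-size matrices."""
    return [[op(a, b) for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

added      = elementwise(A, B, lambda a, b: a + b)  # [[6, 8], [10, 12]]
subtracted = elementwise(A, B, lambda a, b: a - b)  # [[-4, -4], [-4, -4]]
multiplied = elementwise(A, B, lambda a, b: a * b)  # [[5, 12], [21, 32]]
```

Note that the element-wise product here is not matrix multiplication in the linear-algebra sense; it multiplies corresponding entries only.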
Basic operations on matrices
Complex operations on matrices
• Convolution and Correlation between two Matrices
• Convolution and correlation in image processing are quite similar, except that
in convolution the kernel is rotated 180 degrees before being applied, while in
correlation the kernel stays the same.
• Usually, convolution and correlation occur between an input matrix
(image) and a kernel.
• In general, the output matrix of a full convolution is bigger than the input
matrix.
• In most image processing, however, the output size is fixed to be the same as
the input size.
• If the kernel is centered (aligned) exactly at the sample that we are interested
in, multiply each kernel value by the overlapped input value.
• The accumulation (adding these multiplications, e.g., 9 of them for a 3×3
kernel) is the last thing to do to find out the output value.
Complex operations on matrices
• Convolution and Correlation between two Matrices
• Example:
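The procedure described above can be sketched in plain Python for a 3×3 kernel, assuming a same-size output with zero padding outside the image borders (the padding choice is an assumption, not stated on the slides):

```python
def correlate(image, kernel):
    """Same-size correlation with a 3x3 kernel; pixels outside the image count as 0."""
    M, N = len(image), len(image[0])
    out = [[0] * N for _ in range(M)]
    for x in range(M):
        for y in range(N):
            acc = 0
            for i in (-1, 0, 1):            # center the kernel at (x, y),
                for j in (-1, 0, 1):        # multiply overlapped values,
                    xi, yj = x + i, y + j
                    if 0 <= xi < M and 0 <= yj < N:
                        acc += kernel[i + 1][j + 1] * image[xi][yj]
            out[x][y] = acc                 # and accumulate the 9 products
    return out

def convolve(image, kernel):
    """Convolution = correlation with the kernel rotated 180 degrees."""
    rotated = [row[::-1] for row in kernel[::-1]]
    return correlate(image, rotated)

# A unit impulse: convolving with it reproduces the kernel,
# while correlating reproduces the kernel rotated 180 degrees.
impulse = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
kernel  = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```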
Image Cropping
• Cropping is the removal of the boundary parts of an image
to improve framing, accentuate subject matter or change
aspect ratio.
• Cropping a poor image won't make it better, but cropping
a good image can improve it.
• The unwanted part of the image is discarded.
• Image cropping does not reduce the resolution of the area
cropped.
• Best results are obtained when the original image has a
high resolution.
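Cropping amounts to keeping a rectangular sub-array of the pixel matrix and discarding the rest. A minimal sketch, representing the image as nested lists:

```python
def crop(image, top, bottom, left, right):
    """Keep rows top..bottom-1 and columns left..right-1; the rest is discarded."""
    return [row[left:right] for row in image[top:bottom]]

image = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]

cropped = crop(image, 1, 3, 1, 3)  # central 2x2 region: [[6, 7], [10, 11]]
```

The kept region retains its original pixel values unchanged, which is why cropping does not reduce the resolution of the cropped area.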
Image Cropping
Image Negation
• The negative of an image is obtained by the negative
transformation which is given by :
s = (L – 1) – r
• This type of processing is particularly suited for enhancing
white or gray detail embedded in dark regions of an
image, especially when the black areas are dominant in
size.
Image Negation

(A)Normal color image


(B)The negative of image A
(C)Normal grayscale image
(D)The negative of image C
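The negative transformation s = (L − 1) − r can be sketched as follows, assuming an 8-bit grayscale image (L = 256):

```python
def negative(image, L=256):
    """Apply s = (L - 1) - r to every pixel value r of a grayscale image."""
    return [[(L - 1) - r for r in row] for row in image]

image = [[0, 64], [128, 255]]
neg = negative(image)  # [[255, 191], [127, 0]]
```

Applying the transformation twice recovers the original image, since s is its own inverse for a fixed L.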
Understanding the frequency domain
• Spatial Domain:
• Normal image space.
• Changes in pixel positions correspond to changes in the scene.
• Directly process the input image pixel array.

• Frequency Domain:
• Transform the image to its frequency representation using Fourier transformation.
• Perform the processing on complex numbers.
• Compute inverse transform back to the spatial domain using an inverse Fourier
transformation.

• Property of transforms: they convert a function from one domain to another
with no loss of information.
• Fourier Transform: converts a function from the spatial domain to the
frequency domain.
Understanding the frequency domain
• The Fourier transform was named after the French scientist Joseph Fourier.
• Fourier was born in France in 1768.
• Most famous for his work “Théorie Analytique de la Chaleur”, published in
1822 and translated into English in 1878 as “The Analytic Theory of Heat”.
• Nobody paid much attention when the work was first published.
• One of the most important mathematical theories in modern engineering.
The Fourier transform is a mathematical transform with many
applications in physics and engineering.
Understanding the frequency domain
• Spatial domain: tells us how properties change over space (from pixel to pixel).
• Frequency domain: tells us how properties (amplitudes) change over
frequencies.
• The term "Fourier transform" refers to both the transform operation and to
the complex-valued function it produces.
Understanding the frequency domain
• Any function that periodically repeats itself can be expressed as a sum of sines
and cosines of different frequencies, each multiplied by a different coefficient
– a Fourier series.
• In the picture on the right (the unshifted Fourier spectrum):
• High frequencies: near the center; low frequencies: at the corners.

(Figures: a scanning electron microscope image of an integrated circuit,
magnified ~2500 times, and the Fourier spectrum of that image, obtained
with the DFT)
The Discrete Fourier Transform (DFT)

The Discrete Fourier Transform of f(x, y), for x = 0, 1, 2, …, M-1 and
y = 0, 1, 2, …, N-1, denoted by F(u, v), is given by the equation:

F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) \, e^{-j 2\pi (ux/M + vy/N)}

for u = 0, 1, 2, …, M-1 and v = 0, 1, 2, …, N-1.
The Inverse DFT

It is important to mention that the Fourier transform is completely
reversible. The inverse DFT is given by:

f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v) \, e^{j 2\pi (ux/M + vy/N)}

for x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1.
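The forward and inverse DFT equations can be sketched directly in plain Python (a naive O(M²N²) implementation for illustration only; practical code uses the Fast Fourier Transform):

```python
import cmath

def dft2(f):
    """2-D DFT of an M x N image given as nested lists: returns F(u, v)."""
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

def idft2(F):
    """Inverse 2-D DFT: recovers the spatial-domain image f(x, y)."""
    M, N = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / M + v * y / N))
                 for u in range(M) for v in range(N)) / (M * N)
             for y in range(N)] for x in range(M)]

image = [[10, 20], [30, 40]]
spectrum = dft2(image)       # F(0, 0) equals the sum of all pixels (100 here)
recovered = idft2(spectrum)  # real parts match the original image
```

The round trip demonstrates the reversibility stated above: idft2(dft2(f)) returns f up to floating-point rounding.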

Understanding the frequency domain
• Complex Numbers
• A Complex Number is a combination of:
• a Real Number: Real Numbers are just numbers like
1, 12.38, -0.8625, 3/4, √2, 1998.
• Nearly any number you can think of is a Real Number.
• an Imaginary Number: Imaginary Numbers are special because,
when squared, they give a negative result.
• Normally this doesn't happen, because:
• when you square a positive number you get a positive result, and
• when you square a negative number you also get a positive result (because a negative
times a negative gives a positive).
Understanding the frequency domain
• Complex Numbers
• But just imagine there is such a number, because we are going to need it!
• The "unit" imaginary number (like 1 for Real Numbers) is (i), which is the
square root of -1.
Understanding the frequency domain
• Complex Numbers
• A Complex Number is a combination of a Real Number and an Imaginary
Number.
Understanding the frequency domain
• Complex Numbers
Understanding the frequency domain
• Complex Numbers
• Complex does not mean complicated.
• It means the two types of numbers, real and imaginary, together form a
complex, just like you might have a building complex (buildings joined
together).
Understanding the frequency domain
• Complex Numbers (Multiplication)
Understanding the frequency domain
• Complex Numbers
• Always remember!!!
• i² = −1
• Multiplying by the conjugate:
• The rule: (a + bi)(a − bi) = a² + b²
• Example: (4 − 5i)(4 + 5i) = 4² + 5² = 41
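The conjugate rule can be checked with Python's built-in complex type (Python writes the imaginary unit as j):

```python
# Multiplying a complex number by its conjugate gives a^2 + b^2, a real number.
z = complex(4, -5)           # 4 - 5i
product = z * z.conjugate()  # (4 - 5i)(4 + 5i) = 4^2 + 5^2 = 41
```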
Image Degradations
• Are the factors that reduce the image quality and
make it appear as a different version of the original
scene.
• Image degradations are caused by many factors.
• Imaging systems may introduce some amounts of
distortions.
• Image quality is a characteristic of an image that
measures the apparent image degradation
(typically, compared to an ideal or perfect image).
Image Degradations
• The observed image can be summarized as a number of key
elements.
• In general, an observed digital image s can be formalized as a mathematical
model combining:
• The original scene o.
• The point spread function (PSF).
• The additive noise n.
• The result, s = (o * PSF) + n, is the observed image.
Image Degradations
• PSF: this describes the way information on the
object function is spread as a result of recording the
data.
• Original Scene: this describes the object (or scene)
that is being imaged.
• Noise: is the unwanted component of the image.
• Convolution operator (*): A mathematical operation
which ‘smears’ (i.e. convolves) one function with
another.
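A toy 1-D sketch of the degradation model s = (o * PSF) + n; the scene, PSF, and noise values here are made-up illustrative numbers, not from the slides:

```python
def convolve1d(signal, psf):
    """'Smear' a 1-D signal with a PSF (full convolution)."""
    out = [0.0] * (len(signal) + len(psf) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(psf):
            out[i + j] += s * h
    return out

scene = [0.0, 0.0, 9.0, 0.0, 0.0]   # original scene o: a single bright point
psf   = [0.25, 0.5, 0.25]           # point spread function: a 3-tap smoothing
noise = [0.1, -0.1, 0.0, 0.1, -0.1, 0.0, 0.1]  # fixed additive noise n

blurred  = convolve1d(scene, psf)                   # o * PSF: the point is smeared
observed = [b + n for b, n in zip(blurred, noise)]  # s = (o * PSF) + n
```

The single bright point in the scene is spread across three samples by the PSF, then perturbed by the noise, yielding the observed signal.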
Image Degradations
• Image noise is a random variation of brightness or color
information in images.
• Three famous types of noise: (Additive White Gaussian noise, Salt-and-
pepper noise and Colored noise).
• Image blur makes the image appear unclear.
• Two famous types of blur: (Motion and Gaussian Blur).
• Image enhancement is basically improving the perception of
information in images for human viewers by providing 'better'
illuminated results.
• Examples: low/high contrast, low/high brightness, color imperfections.
Image Noise
• Noise in general happens due to different reasons:
1. Hardware and software errors.
2. Sensor errors.
3. Analog-to-digital converter errors.
4. Transmission errors.
5. Variation in the number of photons sensed at a given exposure level.
6. Low photon absorption.
7. Low-light environments.
8. Longer shutter speeds, which lead to increased noise.
9. The size of the image sensor (larger sensors typically create lower-noise
images than smaller sensors).
10. Temperature can also have an effect on the amount of noise produced by an
image sensor (more noise is produced during summer than winter).
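Salt-and-pepper noise, one of the famous noise types listed earlier, can be simulated by flipping a random fraction of pixels to pure black or pure white. A sketch with a fixed seed for reproducibility (the 30% amount is an arbitrary illustrative choice):

```python
import random

def salt_and_pepper(image, amount=0.3, seed=0):
    """Corrupt roughly `amount` of the pixels, setting each chosen pixel to 0 or 255."""
    rng = random.Random(seed)        # fixed seed so the result is reproducible
    noisy = [row[:] for row in image]
    for x in range(len(noisy)):
        for y in range(len(noisy[0])):
            if rng.random() < amount:
                noisy[x][y] = rng.choice((0, 255))  # pepper (black) or salt (white)
    return noisy

image = [[100] * 4 for _ in range(4)]  # a uniform mid-gray test image
noisy = salt_and_pepper(image)
```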
Image Noise (Examples)

From left to right:
• 1st image: clean (no apparent noise)
• 2nd image: has additive white Gaussian noise
• 3rd image: has salt-and-pepper noise
Image Noise (Examples)

Colored Noise
Image Blur
• Blur in general happens due to different reasons:
1. The subject moves while the shutter is open.
2. The camera moves while the shutter is open.
3. Imaging a long-distance object (through the atmosphere).
4. Imperfect resolution of the imaging system.
5. Losing information during the acquisition process.
6. Slow shutter speed.
7. Employing low-pass filters for reducing noise, which amplifies
blur.
Image Blur (Examples)

Gaussian Blur
Image Blur (Examples)

Motion Blur
Image Blur (Examples)

Motion Blur
Image Enhancement
• Image enhancement is among the simplest and
most appealing areas of digital image processing.
• Basically, the idea behind enhancement techniques
is to bring out detail that is obscured, or simply to
highlight certain features of interest in an image.
• A familiar example of enhancement is when we
increase the contrast of an image because “it looks
better.”
Image Enhancement
• Brightness is an attribute of visual perception in which a source appears to be
radiating or reflecting light.
• Contrast describes tones, specifically the relationship between the darkest
and brightest parts of an image.
• Contrast is the difference between the minimum and maximum pixel values in
an image.
• High difference → image has high contrast.
• Low difference → image has low contrast.
• High-contrast images are perceived better.
• Low-contrast images have a hazy look.

• Color is the term that describes every tint, tone, hue or shade that the human
eye can see, including white, black, and gray.
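The max-minus-min notion of contrast above, together with a standard min-max contrast stretch (the stretch formula is common practice, not taken from the slides), can be sketched as:

```python
def contrast(image):
    """Difference between the maximum and minimum pixel values."""
    pixels = [p for row in image for p in row]
    return max(pixels) - min(pixels)

def stretch(image, L=256):
    """Min-max contrast stretch to the full [0, L-1] range (a standard
    technique, assumed here rather than taken from the slides)."""
    pixels = [p for row in image for p in row]
    lo, hi = min(pixels), max(pixels)
    return [[round((p - lo) * (L - 1) / (hi - lo)) for p in row] for row in image]

low_contrast = [[100, 110], [120, 130]]  # narrow range: hazy-looking image
stretched = stretch(low_contrast)        # values now span 0..255
```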
Image Enhancement (Examples)

Bright image Dark image


Normal grayscale image

High-contrast image Low-contrast image


Image Enhancement
• Hue refers only to the pure spectrum color names found on the
color wheel: red, orange, yellow, green, blue, and violet.
• Modifying the hue allows you to shift the color values of your images.
• Saturation describes the depth or intensity of color present within
an image.
• Saturation is also referred to as “chroma”.

• The more saturated an image is, the more colorful and vibrant it
will appear; less color saturation will make an image appear
subdued or muted.
• Black and white images contain no color saturation, instead being
rendered in greyscale tones.
Image Enhancement
• Over Saturation:
• Over-saturated photos look vivid and intense in color.

• But there is a loss of detail in color-highlighted areas that makes color contrast too strong and uncomfortable.

• E.g., over saturation of skin tones leaves them looking too orange and unnatural.

• Over saturation can happen with only one color channel (e.g., red color is extremely eye-catching).

• Some over saturation happens intentionally by post-processing, stylization, or high-dynamic-range (HDR) imaging techniques.

• Under Saturation:
• Under-saturated photos look muted, blended, and close to gray color.

• There is a loss of color information that leads to old, faded, inactive, or washed-out styles.

• Grayscale photos are an extreme case of under saturation.

• Most under saturation happens undesirably due to:


• Poor camera quality
• Taking the photo through a dirty / turbid medium (e.g., glass, water)
• Haze or smoke
• Sometimes under saturation can happen intentionally for stylization.
Image Enhancement (Examples)

Low-Hue Image High-Hue Image


Normal Colored Image

Low - Saturation High - Saturation


Image Histogram (Grayscale image)
• Histogram: a graphical representation of the number of pixels at each intensity value in an image.

• Image histogram is a type of histogram that acts as a graphical representation of the


tonal distribution in a digital image.

• It plots the number of pixels for each tonal value.

• By looking at the histogram for a specific image a viewer will be able to judge the entire
tonal distribution at a glance.

• The horizontal axis of the graph represents the tonal variations, while the vertical axis
represents the total number of pixels in that particular tone.

• The left side of the horizontal axis represents the dark areas, the middle represents mid-
tone values and the right hand side represents light areas.

• The vertical axis represents the size of the area (total number of pixels) that is captured
in each one of these zones.
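A grayscale histogram is simply a count of pixels per tonal value. A minimal sketch for an 8-bit image:

```python
def histogram(image, L=256):
    """Count how many pixels take each tonal value 0..L-1."""
    counts = [0] * L
    for row in image:
        for p in row:
            counts[p] += 1
    return counts

image = [[0, 0, 255], [128, 128, 128]]
h = histogram(image)  # h[0] = 2 dark pixels, h[128] = 3 mid-tones, h[255] = 1 light
```

The index (horizontal axis) is the tonal value and the count (vertical axis) is the number of pixels at that tone, matching the description above.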
Image Enhancement (Examples with Histograms)
Histogram (Color Image)
• Color histogram is a representation of the distribution of colors in an image.
• For digital images, a color histogram represents the number of pixels that have
colors in each of a fixed list of color ranges, that span the image's color space,
the set of all possible colors.
• The color histogram can be built for any kind of color space, although the term is
more often used for three-dimensional spaces like RGB or HSV.
THE END
