
Chapter Four

Digital Image Processing


 This lecture will cover:
 What is an image?
 What is digital image processing?
 Key stages in digital image processing
What is an image?
 An image is a 2D representation of objects in a real scene.
 An image refers to a 2D light intensity function f(x,y),
 where (x,y) denote spatial coordinates and the value of f at
any point (x,y) is proportional to the brightness or gray
levels of the image at that point.
 Is a 2D array of pixels, with each pixel holding an unsigned value
between 0 and 255 (for 8-bit data)
 Pixel: The elements of a digital image.
 A natural image is a continuous, 2-dimensional distribution
of brightness (or some other physical effect).
 Conversion of natural images into digital form involves two
key processes, jointly referred to as digitization: sampling (of
the spatial coordinates) and quantization (of the brightness values).
What is a Digital Image?
 Is a representation of a 2D image as a finite set of digital
values, called picture elements or pixels
 Is an array of numbers depicting the spatial distribution of a field
or parameter.
 Is a digital representation in the form of rows and columns.
 In a digital image, each unit (pixel) is represented by an
integer called a digital number (DN).
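As a concrete illustration (a minimal sketch, not from the original slides), a digital image can be held as a 2D NumPy array of DNs:

import numpy as np

# A tiny 4x4 8-bit "image": each entry is a digital number (DN)
# between 0 (black) and 255 (white).
image = np.array([
    [  0,  64, 128, 255],
    [ 12,  80, 140, 230],
    [  5,  70, 150, 240],
    [  0,  60, 130, 250],
], dtype=np.uint8)

rows, cols = image.shape  # spatial extent of the pixel grid
print(image[1, 2])        # DN (brightness) at row 1, column 2 -> 140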
What is Digital Image Processing?
Computer-Assisted Scene Interpretation (CASI); also called
Image Processing
The study of algorithms that take an image as input and
return a vector (features) or a matrix (image) as output
 Is enhancing an image or extracting information or features
from an image
 Manipulation and interpretation of digital images
 Computerized routines for information extraction (e.g.,
pattern recognition, classification) from remotely sensed
images to obtain categories of information about specific
features
 It often involves procedures that can be mathematically
complex.
 It comprises hardware, software, and data
A Discipline in Which Both the Input and Output of a Process are
Images.

Image → Process → Image

Digital image processing focuses on two major tasks


– Improvement of pictorial information for human interpretation
– Processing of image data for storage, transmission and
representation for autonomous machine perception
There is some argument about where image processing ends and fields
such as image analysis and computer vision begin
Types of image processing
 Analog image processing
 Digital image processing.
Visual or Analog processing
 Applied to hard-copy data or prints
 Relies on certain elements of visual interpretation
Why do we need image processing?
 It is motivated by the following major applications:
 Improvement of pictorial information for human
perception
 Image processing for autonomous machine
applications
 Efficient storage and transmission
 Digital Image Processing undergoes three general steps:
• Pre-processing (image restoration and rectification)
• Display and enhancement
• Information extraction (classification)
 Image Pre-Processing
 Every “raw” remotely sensed image contains a number of
artifacts and errors.
 Correcting such errors and artifacts is termed preprocessing.
 The term comes from the fact that PRE-processing is
required for a correct PROCESSING to take place.
 The boundary line between pre-processing and
processing is often fuzzy.
 Image rectification and restoration.
 Create a more faithful representation through:
– Geometric correction
– Radiometric correction
– Atmospheric correction
 Can also make it easier to interpret using “image enhancement”
 Imagery can be ordered at different levels of correction and
enhancement
 Rectification – remove distortion (platform, sensor, earth,
atmosphere)
Geometric and radiometric correction
 Since any image involves radiometric as well as geometric errors,
these errors should be corrected
 Radiometric correction
 Removal of sensor or atmospheric 'noise', to more accurately
represent ground conditions and improve image 'fidelity'
 Improving the accuracy of surface spectral reflectance, emittance, or
back-scattered radiation measurements
 Types of radiometric correction
 Detector or sensor error
 Atmospheric error
 Topographic error
• Radiometric correction is used to modify DN values
to account for noise, i.e.  contributions to the DN
that are a result of…
a. the intervening atmosphere
b. the sun-sensor geometry
c. the sensor itself – errors and gaps
 Sensor problems show as striping or missing lines
of data.
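As an illustration only, here is a minimal NumPy sketch of two simple radiometric fixes for a single-band image: dark-object subtraction (a crude correction for atmospheric scattering) and replacing a dropped scan line by averaging its neighbours. Operational processing chains use calibrated sensor and atmosphere models; the function names here are hypothetical.

import numpy as np

def dark_object_subtraction(band):
    # Crude haze correction: assume the darkest pixel in the scene
    # should have a DN of ~0, so any offset is atmospheric scattering.
    shifted = band.astype(np.int32) - int(band.min())
    return np.clip(shifted, 0, 255).astype(np.uint8)

def repair_missing_line(band, row):
    # Replace a dropped scan line with the mean of the rows above
    # and below it (assumes 0 < row < band.shape[0] - 1).
    fixed = band.astype(np.float64)
    fixed[row] = (fixed[row - 1] + fixed[row + 1]) / 2.0
    return fixed.astype(band.dtype)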
Geometric correction
 Images are rarely provided in the correct projection, coordinate system
and free of geometric distortions.
 Distortion in the spatial domain usually occurs during remote
sensing data acquisition.

 Geometric correction organizes the spatial placement of
measurements.
 Conversion of data to ground coordinates by removal of distortions
from sensor geometry
 Geometric distortions occur due to Earth rotation during acquisition and
due to Earth curvature.
 Objective: to make the geometric representation of the imagery as
close as possible to the real world
Geometric corrections are intended to compensate
for these distortions due to:
 perspective of the sensor optics;
 motion of the scanning system;
 motion of the platform;
 platform altitude, attitude, and velocity;
 terrain relief;
 curvature and rotation of the Earth.
Normally implemented as a two-step procedure:
 First, distortions that are systematic are considered.
 Second, distortions that are random or unpredictable are corrected.
 Systematic errors are predictable in nature and
can be corrected using data from the orbit of the platform and
knowledge of internal sensor distortion.
 Random errors are corrected based on geometric registration of
remote sensing imagery to a known ground coordinate system
(e.g., a topographic map), via:
 image-to-map registration (sketched below)
 image-to-image registration
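The image-to-map registration step can be sketched as fitting a first-order (affine) polynomial to ground control points by least squares. The GCP coordinates below are made up purely for illustration:

import numpy as np

# Hypothetical GCPs: (column, row) in the image matched to
# (easting, northing) on a reference topographic map.
img_pts = np.array([[10, 20], [200, 30], [50, 180], [220, 200]], float)
map_pts = np.array([[500010, 4100480], [500200, 4100470],
                    [500050, 4100320], [500220, 4100300]], float)

# Design matrix for X = a0 + a1*col + a2*row (and likewise for Y).
A = np.column_stack([np.ones(len(img_pts)), img_pts])
coef_x, *_ = np.linalg.lstsq(A, map_pts[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, map_pts[:, 1], rcond=None)

def image_to_map(col, row):
    # Transform a pixel location into map coordinates.
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

print(image_to_map(10, 20))  # should land near the first GCP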
Image Enhancement
 Improving the interpretability of the image by increasing
apparent contrast among various features.
 Techniques for increasing the visual distinctions between
features in a scene.
 Image enhancement refers to data processing
that aims to increase the overall visual quality of
an image or to enhance the visibility and
interpretability of certain features of interest in it.
Most enhancement techniques may be categorized as either
point or local operations (a minimal sketch of each follows below):
1. Point operations: modify the brightness value of
each pixel in an image data set independently.
2. Local operations: modify the value of each pixel
based on neighboring brightness values.
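A minimal sketch of the two categories (illustrative only): a point operation (photographic negative) and a local operation (3x3 mean smoothing):

import numpy as np

def negative(image):
    # Point operation: each output pixel depends only on the
    # corresponding input pixel (assumes 8-bit data).
    return 255 - image

def mean3x3(image):
    # Local operation: each output pixel is the mean of its 3x3
    # neighbourhood (border pixels are left unchanged here).
    img = image.astype(np.float64)
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
    return out.astype(image.dtype)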
Image enhancement is the process of making images more
useful
The reasons for doing this include:
– Highlighting interesting detail in images
– Removing noise from images
– Making images more visually appealing
Contrast stretching
 Contrast refers to the difference in luminance or grey-level values
in an image
 Contrast enhancement involves changing the original values
so that more of the available range is used
 The result is a sharper, more pleasing picture
 It is important to understand the concept of an image histogram:
a graphical representation of the brightness values that comprise
an image, showing the distribution of grey levels in the image
 The method of producing a uniform histogram is known generally
as histogram equalization
 In practice a perfectly uniform histogram cannot be achieved
for digital image data
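As an illustration (not from the slides), a linear min-max contrast stretch, a typical point operation, can be sketched as follows; this assumes an 8-bit band that is not perfectly uniform:

import numpy as np

def linear_stretch(band):
    # Rescale DNs so the darkest pixel maps to 0 and the brightest
    # to 255, using the full available range.
    b = band.astype(np.float64)
    lo, hi = b.min(), b.max()
    return ((b - lo) / (hi - lo) * 255.0).astype(np.uint8)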
Histogram
• Image Histogram: For every digital image the pixel
value represents the magnitude of an observed
characteristic such as brightness level. An image
histogram is a graphical representation of the
brightness values that comprise an image. The
brightness values (i.e. 0-255) are displayed along
the x-axis of the graph. The frequency of
occurrence of each of these values in the image is
shown on the y-axis.
The histogram of an image shows us the distribution of
grey levels in the image
Massively useful in image processing, especially in
segmentation
[Histogram plot: grey levels on the x-axis, pixel frequencies on the y-axis]
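Computing the histogram shown above, and the histogram equalization mentioned earlier, takes only a few lines; the random array here is just a stand-in for a real band:

import numpy as np

band = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # stand-in image

# counts[v] = number of pixels with DN == v (y-axis); DNs 0..255 (x-axis).
counts, _ = np.histogram(band, bins=256, range=(0, 256))

# Histogram equalization: map DNs through the normalized cumulative
# histogram so the output histogram is as uniform as possible.
cdf = counts.cumsum().astype(np.float64)
cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
equalized = (cdf[band] * 255).astype(np.uint8)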
Image classification
 Digital image classification is the process of assigning
pixels to classes (categories)
 Usually each pixel is treated as an individual unit
composed of values in several spectral bands.
 Each pixel has digital values
 Compare the values to pixels of known composition and
assign them accordingly
 Each class (in theory) is homogenous
 Classifier
 The term classifier refers loosely to a computer program
that implements a specific procedure for image
classification.
Types of classification
 Supervised classifications
 Unsupervised classification
RS images are usually composed of several relatively uniform
spectral classes.
Thus, unsupervised classification (USC) is the identification, labeling
and mapping of such classes based on a computer algorithm.
 It can be defined as the identification of natural
groups, or structures, within multispectral data.
 Advantages of unsupervised classification:
• No extensive prior knowledge of the region is required.
• The opportunity for human error is minimized.
• Unique classes are recognized as distinct units.
 Disadvantages of unsupervised classification:
• Classes do not necessarily match informational categories
of interest.
• Limited control over classes and their identities.
• Spectral properties of classes can change with time.
(A minimal clustering sketch follows below.)
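Unsupervised classification is often performed with a clustering algorithm such as k-means; here is a minimal NumPy sketch (illustrative only, not the exact algorithm any particular package uses):

import numpy as np

def kmeans_classify(pixels, k=4, iters=20, seed=0):
    # 'pixels' is an (n, bands) array of spectral vectors; returns a
    # class label 0..k-1 for each pixel.
    pixels = np.asarray(pixels, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest spectral class centre.
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Usage: flatten a (rows, cols, bands) image to (rows*cols, bands),
# cluster, then reshape the labels back to (rows, cols) as a class map.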
Supervised classification
Supervised classification requires the analyst to identify
known areas
Start with knowledge of class types.
Training samples are created for each class.
Ground truth is used to verify the training samples.
(A minimal classifier sketch follows at the end of this section.)
• Advantages
Analyst has control over the selected classes tailored to
the purpose.
Has specific classes of known identity.
Is not faced with the problem of matching spectral categories on
the final map with informational categories of interest.
Can detect serious errors in classification if training areas
are misclassified.
 Disadvantages
Analyst imposes a classification (may not be
natural)
Training data are usually tied to informational
categories and not spectral properties
Training data selected may not be representative
Selection of training data may be time consuming
and expensive
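One of the simplest supervised classifiers is minimum distance to means: class means are computed from the analyst's training samples, and each pixel is assigned to the class with the nearest mean. A minimal sketch with hypothetical training data:

import numpy as np

def min_distance_classify(pixels, training):
    # 'pixels': (n, bands) spectral vectors; 'training': dict mapping
    # class name -> (m, bands) training samples chosen by the analyst.
    names = list(training)
    means = np.array([np.asarray(training[n], float).mean(axis=0) for n in names])
    dist = np.linalg.norm(pixels[:, None, :].astype(float) - means[None, :, :], axis=2)
    return [names[i] for i in dist.argmin(axis=1)]

training = {  # hypothetical 3-band training samples
    "water":  [[20, 15, 10], [22, 14, 9]],
    "forest": [[40, 60, 30], [42, 58, 33]],
}
print(min_distance_classify(np.array([[21, 15, 10]]), training))  # ['water']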
Point-based Classification
 It is the simplest and most economical approach
Considers each pixel individually
Cannot describe relations to neighboring pixels
THANK YOU
Chapter Four
 Principles of thermal remote sensing
 Thermal remote sensing is the branch of remote sensing that
deals with the acquisition, processing and interpretation of data
acquired in the thermal infrared (TIR) region of the
electromagnetic (EM) spectrum.
 In thermal remote sensing we measure the radiation
'emitted' from the surface of the target.
 Emitted Energy
 The sensor detects solar radiation that has been absorbed by the
earth, then reemitted as thermal infrared radiation.
 Optical remote sensing (visible and near-IR)
– Examine abilities of objects to reflect solar
radiation
 Emissive remote sensing (mid-IR and microwave)
– Examine abilities of objects to absorb shortwave
visible and near-IR radiation and then to emit this
energy at longer wavelengths
Thermal IR Remote Sensing
 Thermal infrared radiation refers to electromagnetic
waves with a wavelength of between 3 and 20
micrometers.
 TRS is based on measuring electromagnetic
radiation in the infrared region of the spectrum
 Most remote sensing applications make use of the 3 to
5 and 8 to 14 micrometer ranges (atmospheric windows lying
between absorption bands).
The main difference between thermal infrared and
near infrared is that thermal infrared is emitted
energy, whereas the near infrared is reflected energy,
similar to visible light.
Surface temperature is the main factor that determines the amount of
energy that is radiated and measured at thermal wavelengths
In practice, thermal data prove to be complementary to other remote
sensing data.
In thermal remote sensing, radiations emitted by ground objects are
measured for temperature estimation.
These measurements give the radiant temperature of a body, which
depends on two factors: kinetic temperature and emissivity.
Principles of Emitted Radiation
 Emissivity is the emitting ability of a real material compared to that
of a black body
 The amount of radiation emitted by an object is determined primarily
by its:
– Internal temperature; and
– Emissivity
• Kinetic temperature is the surface temperature of a
body/ground and is a measure of the amount of heat energy
contained in it
• Black body is a theoretical object that absorbs and then
emits all incident energy at all wavelengths. This means that
the emissivity of such an object is by definition 1.
• Needless to say, such an object is only imaginary and no
natural substance is an ideal black body.
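These definitions connect through the Stefan-Boltzmann law (with σ ≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴), which gives the standard relation between radiant and kinetic temperature:

M = \sigma T^{4}                                  (black body)
M = \varepsilon \sigma T_{kin}^{4}                (real material, emissivity \varepsilon)
\sigma T_{rad}^{4} = \varepsilon \sigma T_{kin}^{4}
    \Rightarrow  T_{rad} = \varepsilon^{1/4} T_{kin}

Since ε ≤ 1, the radiant temperature measured by the sensor is always at or below the true kinetic temperature of the surface.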

Factors affecting the kinetic temperature can be categorized in two
broad groups: the heat energy budget and the thermal properties of
the materials.
