
J Med Syst (2018) 42:44

https://doi.org/10.1007/s10916-017-0880-7

IMAGE & SIGNAL PROCESSING

Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform

Rehan Ashraf1 · Mudassar Ahmed1 · Sohail Jabbar1 · Shehzad Khalid2 · Awais Ahmad3 · Sadia Din4 · Gwangil Jeon5

Received: 30 August 2017 / Accepted: 13 December 2017


© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement of any image retrieval process is to sort the images by their visual similarity to a query. Color, shape and texture are examples of low-level image features, and such features play a significant role in image processing: a compact representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. Because features characterize the behavior of an image, they determine the storage required, the efficiency of classification, and the time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique performs better, since the effectiveness of a CBIR approach depends fundamentally on feature extraction; in image processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to an image passed as query by using distance metrics. The proposed method performs image retrieval based on the YCbCr color space combined with a Canny edge histogram and the discrete wavelet transform; this combination of edge histogram and discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to determine the suitability of a specific wavelet function for image retrieval. The proposed algorithm is implemented and evaluated on the Wang image database. For the image retrieval step, Artificial Neural Networks (ANN) are applied on this standard dataset of the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and comparing them with other proposed methods, demonstrating the superiority of our method: the proposed approach outperforms the existing research in terms of average precision and recall values.

Keywords CBIR · Discrete wavelet transform · YCbCr · Canny descriptor · Histogram · Artificial neural network ·
Similarity

This article is part of the Topical Collection on Image & Signal Processing.

Corresponding author: Rehan Ashraf ([email protected]). Extended author information is available on the last page of the article.

Introduction

The different forms of multimedia content are growing, and Content-Based Image Retrieval (CBIR) has become a challenging research area [1–3]. With the development of technology and the usage of the internet, the volume of shared multimedia content, especially digital images, has grown exponentially [4–6]. The retrieval of similar multimedia content from a large repository is an open research problem, and CBIR is a dynamic and broad field of image processing and computer vision [7–9]. With the spread of flexible computing hardware, digital data acquisition has become a standard and widespread procedure in recent years. A CBIR system extracts a variety of features from each image and uses measurements such as a metric distance over the dataset to rank the most visually similar images for a given query image.

A CBIR framework captures visual image content such as color [10, 11], texture [12–14] and shape [15–17], and returns results that are visually similar to the query in an efficient manner. A common strategy is to generate image representations from the texture, color, or shape content of the image and to categorize them through machine learning techniques; the semantic class is then determined every time a query is encountered. Feature extraction is the most important step in computer vision and image processing. Features can be divided into two main groups: global features of the image and local, region-based features. A global representation treats the entire image as a single unit to capture its general characteristics: color histograms [18, 19], texture [20, 21], color layout [22, 23] and similar descriptors are used to extract global features. To obtain local features, region segmentation and interest-point detection are the key steps [24, 25]. In this regard, features can also be extracted with salient-feature methods such as the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) [7, 26].

In CBIR, the image signature and feature extraction play an important role in building good image retrieval. The query image and the images in the repository are represented as sets of feature vectors, and related results are ordered by distance or by a semantic organization obtained through machine learning methods [27, 28]. Color, texture and shape, or a mathematical combination of any of them, are referred to as low-level features. Color features are widely used in CBIR and are considered more discriminative in three-channel color images than in single-channel gray-scale images. Texture is a very powerful visual descriptor used to capture repeated patterns of image surfaces. Shape is a basic visual hint by which humans recognize objects, and shape features are used in many image retrieval applications [21, 29, 30]. Shape description methods are divided into contour-based and region-based approaches: contour-based methods extract features from the shape boundary, while region-based methods extract features from across the whole region. In this regard, image segmentation plays a key role in region-based description, with each segment based on feature homogeneity.

The color features of digital images depend on the theory of color perception: the essential elements of color, their representation, and their properties in digital images. Different color spaces carry different amounts of color information, and how this information is quantified by the feature extraction method is a primary design decision. Color is usually represented by color histograms, color correlograms, color coherence vectors and color moments in a particular color space [31, 32]. Raghupathi et al. extracted color features by using the color histogram and the Gabor transform [33]. The work of [34] presented a CBIR system based on color, texture, shape and their combination, obtained by dividing the image into segments; the texture and color features are computed on each segment as local descriptors, analyzed using a two-level grid system, and Gradient Vector Flow is used to represent the shape features.

In this paper, we use color edge detection and discrete wavelet approaches to build feature vectors from the images. The YCbCr and RGB color spaces are used to extract the color features. The RGB plane directly represents the bitmap of a color image; however, it is known from studies of human vision that the eye has different sensitivity to color and to brightness: the human eye is more sensitive to changes in brightness than to changes in color. Therefore, we employ a transformation from RGB to YCbCr. In YCbCr, the luminance component of a color image is represented by Y, and the chrominance parts are denoted by Cb and Cr. The difference between YCbCr and RGB is that YCbCr represents brightness and two color-difference signals, while RGB represents red, green and blue directly. The YCbCr color space was selected to reduce the effect of color variation, because the Y luminance component is independent of the color [28]. To extract the edge feature, the Canny edge detector is used, which copes well with noise in a color image: it guarantees a single response to a single edge and gives well-localized shape structure at any scale. Whenever a query image arrives, its color and edge features are computed and combined; if the distance between the feature vector of the query image and that of a repository image is small enough, the corresponding image from the database is selected to match the image passed as query.
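To make the color-space step concrete, the RGB to YCbCr conversion discussed above can be sketched as follows. This is a minimal per-pixel Python sketch using the common BT.601-style coefficients; the function names are illustrative only and are not part of the published implementation (which used Matlab).

```python
# Per-pixel RGB <-> YCbCr conversion (BT.601-style coefficients, as used in JPEG).
# Y carries luminance; Cb and Cr carry the blue- and red-difference chrominance.
def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Inverse transform: recovers the RGB channels from luminance and chrominance.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b
```

The round trip through ycbcr_to_rgb recovers the original channels up to small floating-point error, which is what allows an edge-modified Y plane to be merged back with the untouched Cb and Cr planes later in the pipeline.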

The mechanism of search is usually built on similarity rather than exact matching: the search results are ranked by a similarity index, and the Manhattan distance is used as the similarity measure. For image retrieval purposes, artificial neural networks (ANN) are applied, and the performance of the system is evaluated on standard datasets used in the CBIR domain.

The remainder of this paper is organized as follows. The next section presents the literature review, followed by the proposed method and the performance evaluation; finally, the conclusion is presented.

Literature review

In the last few years, significant research has been reported for CBIR, analyzing different aspects such as feature fusion, spatial information and semantic interpretation [9]. Color is considered one of the most expressive visual characteristics and relates to image foreground, background and objects; it is reported to be robust to image translation and rotation [1, 2, 35]. Tian et al. [36] proposed a feature for image representation based on the Edge Orientation Difference Histogram (EODH). Their method is reported to be scale and rotation invariant, and it integrates the Color Scale Invariant Feature Transform (Color-SIFT) with EODH through a weighted codebook distribution. According to Walia et al. [37], a combination of color and texture is an effective way to enhance the performance of image matching and retrieval; the color is extracted by applying two different features to make the approach more reliable. According to [38], images vary in scale and viewpoint: a quantized RGB color space is used for color representation, the texture is preserved with the help of patterns, and the extracted color and texture are fused to increase retrieval performance. According to [7], the late fusion of SIFT and Speeded-Up Robust Features (SURF) enhances the performance of CBIR and performs better than SIFT or SURF alone. Recent advances in image retrieval have also diverted research towards binary descriptors, as they are reported to be computationally efficient. According to [7], a combination of Local Binary Patterns (LBP) and SIFT enhances retrieval performance: the two features complement each other, because SIFT performs poorly on noisy backgrounds and ambiguous objects, while LBP is reported to be robust in this case. According to [39], early fusion with a weighted average of clustering is robust for image retrieval, as it maintains a balance in the feature vector representation. The research models presented in [7, 9, 40, 41] are based on an image classification framework, while the proposed research differs from previous work in that it is based on color and Haar transform features, and images are retrieved by applying distance measures instead of a classification-based (training-testing) approach. The basic requirement of any image retrieval process is to sort the images by visual similarity to the query; the main idea of CBIR is to search a repository for images similar to the image passed as query by using distance metrics and similarity matching techniques.

Proposed method

Color, shape and texture are examples of low-level image features. A CBIR system extracts such features from each image using different feature extraction techniques [42, 43]. Texture, color, shape and salient points are the visual attributes of an image; the system computes these properties for every image in the repository and stores them in a separate feature database. The system automatically extracts features from the query image in the same way as for the database images, finds the images in the feature database whose vectors match the eigenvectors of the query image, and sorts the most similar objects according to their similarity [28, 44, 45]. There are therefore two main processes: feature extraction and feature matching. A typical CBIR system is shown in Fig. 1; feature extraction is among the most important steps in image processing and has been studied extensively in the field of computer vision. Feature combination, also known as feature fusion, is applied in CBIR to increase performance, since a single feature is not robust to the transformations present in image datasets. In this paper, we propose a method for CBIR that is based on color features. Color is considered one of the most expressive features and relates to image

Fig. 1 Typical CBIR system

foreground, background and objects. It is reported to be robust to image translation and rotation, and a variety of color descriptors are used in different applications [8, 46, 47]. A color space identifies a particular combination of color model and mapping function.

In this paper, the color feature is obtained by detecting image edges in the YCbCr color space. In YCbCr, Y is known as the luma component, and the other factors, Cb and Cr, are the blue-difference and red-difference chroma components. The luma component represents luminance, meaning that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries. The color image is converted into YCbCr, and the Canny detector is then applied to the Y component to obtain the edge information of the image. The edge matrix taken from the Y luminance matrix and the unchanged Cb and Cr matrices are combined and converted back into a complete RGB image. This RGB image contains the edge information, and histograms are computed on each individual RGB channel. After the edge histograms, the discrete wavelet transform is applied to represent the feature vector. The proposed features are applied to all images present in the dataset, and their semantic classes are determined through ground-truth training with Artificial Neural Networks, together with the nearest neighborhood of every image.

Fig. 2 Proposed method
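As a rough illustration of the edge step, the sketch below computes a thresholded gradient-magnitude map on the Y channel. It is a deliberately simplified stand-in for the Canny detector named above: the real Canny algorithm adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding. The function name and the threshold value are our assumptions.

```python
# Simplified gradient-magnitude edge map on the Y (luminance) channel.
# A lightweight stand-in for Canny; border pixels are left as non-edges.
def edge_map(y, threshold=32):
    h, w = len(y), len(y[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = y[i][j + 1] - y[i][j - 1]   # horizontal central difference
            gy = y[i + 1][j] - y[i - 1][j]   # vertical central difference
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[i][j] = 1
    return edges
```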


We generated an inverted index over the semantic sets, which guarantees fast image retrieval once the semantic class of the query image has been determined. The proposed method is shown in Fig. 2.

The main steps of color feature extraction are described below:

1. The M × N input RGB image I is converted into the YCbCr color space.
2. After the conversion, the YCbCr components are separated and the Canny edge detector is applied to the Y component of the image.
3. In the next step, the edge map obtained in the preceding step is combined with the unchanged Cb and Cr components.
4. After the 3rd step, the synthesized image is converted back to a single RGB image.
5. After the conversion, the RGB image is separated into its individual R, G and B components, and the histogram of each channel is computed; 256 bins are obtained for each of HR, HG and HB.
6. To enhance the feature performance, the discrete wavelet transform is applied to every histogram obtained in the last step: a 2-level DWT is applied to HR and a 3-level DWT to HG and HB. After applying the DWT, we obtain 128 bins in total: 64 bins from HR, 32 bins from HG, and 32 bins from HB.
7. In this way, a feature vector is calculated for every image in the repository.

In Fig. 3, example images obtained after color edge detection are presented. Whenever a query image arrives, the distance between its feature vector and those of the database images is computed; if the distance between the query feature vector and a database image is small enough, the corresponding image in the database is selected to match the image passed as query. The search is usually based on similarity rather than exact matching: the Manhattan distance is used as the similarity index, and the search results are sorted by this index:

D = Σ_{i=1}^{n} |Fq(i) − Fr(i)|        (1)

where the feature vector of the image passed as query is denoted by Fq and a feature vector present in the database by Fr [9].

Fig. 3 Color feature
Fig. 4 Discrete wavelet decomposition

Discrete wavelet transform

The Discrete Wavelet Transform (DWT) is used to transform images from the spatial domain to the frequency domain [48, 49]. The wavelet transform represents a signal as a superposition of a family of basis functions known as wavelets. In the wavelet transform, a low-pass filter and a high-pass filter are applied to retrieve information from different signal levels.
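Steps 5 and 7 of the pipeline above, together with the similarity measure of Eq. (1), can be sketched as follows; the helper names are illustrative only.

```python
# Step 5: a 256-bin histogram of one 8-bit channel (values 0..255).
def channel_histogram(channel_values, bins=256):
    hist = [0] * bins
    for v in channel_values:
        hist[v] += 1
    return hist

# Eq. (1): Manhattan (L1) distance between a query feature vector Fq
# and a repository feature vector Fr of equal length.
def manhattan_distance(fq, fr):
    return sum(abs(q - r) for q, r in zip(fq, fr))
```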



The wavelet transform offers good energy compaction and multi-resolution capability; it is stable under offsets in color intensity, so texture and shape information can be obtained efficiently. The wavelet transform can be computed in linear time, which makes it a very efficient algorithm. The decomposition performed by the DWT projects the signal onto a set of basis functions and wavelet scaling functions. The two-dimensional wavelet transform is a multi-resolution method that uses recursive filtering and sub-sampling: at each level, the discrete wavelet transform decomposes the image into four frequency subbands, LL (low-low), LH (low-high), HL (high-low) and HH (high-high), where L denotes low frequency and H denotes high frequency, as shown in Fig. 4. In this paper, the characteristics of the images are calculated using the Haar wavelet, which is the fastest to compute and has been found to perform well [50, 51]. The Haar wavelet allows us to accelerate the wavelet calculation for thousands of sliding windows of different sizes over the image.

Fig. 5 ANN simulation

Content based image retrieval using ANN

When an image is represented by its stored low-level features, its semantic class can be determined. To compute the actual semantic classes, we create a class-specific training set of sub-images from the repository.
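Under the assumption that the 64 + 32 + 32 = 128 retained bins of step 6 are the low-pass (approximation) coefficients of the Haar decomposition, computed here with the averaging form of the Haar low-pass filter, the histogram reduction can be sketched as:

```python
# One Haar approximation level: average adjacent pairs, halving the length.
def haar_approx(signal):
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def dwt_levels(signal, levels):
    for _ in range(levels):
        signal = haar_approx(signal)
    return signal

def build_feature_vector(hr, hg, hb):
    # 2-level DWT on HR (256 -> 64 bins), 3-level on HG and HB (256 -> 32 each),
    # concatenated into the 128-bin feature vector described in step 6.
    return dwt_levels(hr, 2) + dwt_levels(hg, 3) + dwt_levels(hb, 3)
```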

Fig. 6 Regression graph



Images are stored under the M known classes, where every class contains R ≥ 2 images. In our implementation, the value of R is set to 30; this means that 30 images of every semantic category from the repository ground truth form the knowledge base for training. Artificial neural networks with class-specific parameters are trained on the repository. The class-specific training set can be defined as t = p ∪ n, where p represents the R images of a particular class and n represents all other images in the training library. The networks are trained in a one-against-all (OAA) fashion, as shown in Figs. 5 and 6, to reduce the squared error between the network output and the true association over p ∪ n. Once the training is completed, the semantic category of every image in the repository is determined by the resulting decision functions and association rules. Because of the object parts present in an image, an image may show association with several classes; for example, a beach image may also be associated with images of other categories. Therefore, we need a mechanism to reduce this ambiguity. For this purpose, a class-elimination process is applied in which the top K neighbors are included in the semantic association through a majority voting rule (MVR) process [52, 53]:

Cm(X) = sgn( Σ_{i=1}^{n} |Ci(X) − (k − 1)/2| )        (2)

where Ci(X) is the class association of the input image X, and its top neighbors are defined as [9, 54]:

Ci(X) = y_fl,  l = 1, 2, 3, . . . , n        (3)

where n is the total number of neural networks and y_fl returns the association element produced by the structure of a neural network for a certain class. An image is finally associated with the class supported by the greatest number of networks, so the class association can be determined by:

Cf(X) = argmax( Σ_{i=1}^{n} |Cm(X)| )        (4)

Once the semantic association of all images present in the image repository is determined, we store the semantic association values in a file that serves as our semantic association database. After determining the semantic class of the query image through the trained neural networks, we compute the distance of the query image to all images of the same semantic class, taking the values of the semantic association database into account, and generate the output on the basis of the feature similarities.

Performance evaluation

To elaborate the retrieval capabilities of the proposed method, numerous experiments were performed on image datasets. For implementation purposes, we used Matlab 2015b in the Windows 7 environment on a Dell Core i3 machine. The proposed features are applied to all images present in the dataset, and their semantic classes are determined through ground-truth training with Artificial Neural Networks, together with the nearest neighborhood of every image. In this paper, we randomly selected 300 images from the image repository for the computer simulation, and these images were further used as query images. We use image datasets whose images are collated by semantic concept on the basis of their labels; as described above, this makes it possible to automatically identify the semantic relevance of images from the labels of the semantic concepts. In the proposed work, we report the average result after running the experiments five times for each image category.

Fig. 7 Results obtained by passing image of Dinosaur as query
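A simplified reading of the class-association stage (Eqs. 2 to 4) and of the inverted index built from it can be sketched as follows. The 0/1 vote representation and the function names are our assumptions, not the exact outputs of the trained OAA networks.

```python
from collections import defaultdict

# Majority-vote class association (simplified view of Eqs. 2-4): each
# one-against-all network casts a 0/1 vote per class, and the image is
# assigned to the class with the largest accumulated vote.
def assign_class(votes):
    # votes: dict mapping class label -> list of 0/1 outputs of the OAA networks
    totals = {label: sum(v) for label, v in votes.items()}
    return max(totals, key=totals.get)

# Inverted index over semantic classes: retrieval first looks up the query's
# predicted class, then ranks only the images filed under that class.
def build_inverted_index(image_classes):
    # image_classes: dict image_id -> predicted semantic class
    index = defaultdict(list)
    for image_id, label in image_classes.items():
        index[label].append(image_id)
    return index
```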

Fig. 8 Results obtained by passing image of flowers as query

In our experiments, after defining the semantic category of the query image, the relevant images are returned; we suggest an indexing mechanism for retrieval similar to the way text search engines index documents. According to the recommended methodology, we train a neural network for each image category present in the repository, which specifies the semantic class itself, and the class-related information is stored in the relevant database. The advantage of this method is that it is only necessary to determine the semantic category of the query image and then search within the corresponding semantic cluster. Overall, the accuracy of the class association is checked by using the precision and recall formulas [9]:

Precision = Tq / Tr        (5)

Recall = Tq / Td        (6)

where Tq is the number of retrieved images that are relevant to the query image, Tr is the total number of images retrieved by the system in response to the query, and Td is the total number of images in the record that are related to the query. Precision measures the fraction of the obtained images that are actually similar to the given image, and thus the ability of the system to extract only images associated with the query; recall, also known as sensitivity or the true-positive rate, measures the ability of the system to find all images related to the query. To report the results, the first 20 retrieved images of each query are used to compute the precision and recall values.

Fig. 9 Precision graph between proposed technique and other methods

Table 1 Results of precision obtained by proposed system and existing systems on top 20 results

Class       Proposed method   Lin [55]   Raghupathi [33]   Pujari [34]   Rao [44]
African     0.75              0.68       0.75              0.54          0.56
Beach       0.60              0.54       0.60              0.38          0.53
Building    0.50              0.56       0.43              0.30          0.61
Buses       0.90              0.89       0.69              0.64          0.89
Dinosaurs   1.00              0.99       1.00              0.96          0.98
Flowers     0.85              0.89       0.93              0.68          0.89
Elephants   0.80              0.66       0.72              0.62          0.57
Mountains   0.55              0.52       0.36              0.45          0.51
Horses      0.70              0.80       0.91              0.75          0.78
Food        0.70              0.73       0.65              0.53          0.69
Mean        0.735             0.726      0.704             0.585         0.701
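The mean row of Table 1 can be checked directly: for the proposed method it is the arithmetic average of the ten per-class precisions.

```python
# Per-class precisions of the proposed method, as listed in Table 1.
proposed = [0.75, 0.60, 0.50, 0.90, 1.00, 0.85, 0.80, 0.55, 0.70, 0.70]
mean_precision = sum(proposed) / len(proposed)
# mean_precision evaluates to 0.735, matching the last row of Table 1.
```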

After running the program five times, we report the average results. To illustrate the performance of the system on randomly selected queries, we obtained retrieval result graphs for the top 20 images of the above categories in the Corel image set. The results in Figs. 7 and 8 show the accuracy of the search results when these query images are retrieved by the proposed method. What is important here is that these results are achieved without external user guidance, such as the relevance feedback used in some CBIR processes.

Figure 9 shows a comparison of the class-wise precision of the proposed system with other systems. The proposed method performs better than all other state-of-the-art systems in terms of the average precision obtained, as shown in Table 1, which gives the class-wise comparison of the proposed system with the other comparative systems in terms of precision. Figure 10 reports the performance comparison of the system in terms of recall rates with the same systems.

For the Corel dataset, the precision and recall results show that integrating the curvelet transform with enhanced dominant colors extraction and texture (ICTEDCT) has the second highest rates in terms of precision and recall. Therefore, we report the performance comparison on the Coil dataset at different retrieval rates against ICTEDCT [14]. For this experiment, five images were selected from each image category, and the performance of both systems was compared on each category. From the results elaborated in Fig. 11, it can be clearly observed that the proposed method gives higher recall and precision rates than ICTEDCT [14]. Hence, from the results of the proposed method on the Coil and Corel datasets, we conclude that the proposed method is more precise and effective than other CBIR systems.

Fig. 10 Recall graph between proposed technique and other methods

Fig. 11 Comparison of precision and recall obtained by proposed method with ICTEDCT

Conclusion and future direction Compliance with Ethical Standards All procedures performed in
studies involving human participants were in accordance with
the ethical standards of the institutional and/or national research
With several applications, content based image retrieval
committee and with the 1964 Helsinki declaration and its later
has gained a lot of research attention. In this paper, we amendments or comparable ethical standards.
introduced a mechanism for automatic image retrieval. Our
focus is on finding the way to ensure the images are
viewing for relatively accurate images. We offer novel Conflict of interests Authors Rehan Ashraf, Mudassar Ahmed, Sohail
Jabbar, Shehzad Khalid, Awais Ahmad, Sadia Din, and Gwangil Jeon
content basing on image capture method, that depends on declare that they have no conflict of interest.
color features. The proposed research is using the color,
analysis of histogram and discrete cosine transform as
they are robust and require less computation power. In the References
proposed method, the edge is derived from the YCbCr
matrix using the canny edge detection method and the RGB 1. Li, X., Uricchio, T., Ballan, L., Bertini, M., Snoek, C. G., and
histogram is calculated as a global statistical illustration Bimbo, A. D., Socializing the semantic gap: a comparative survey
of color distribution representing the image. To further on image tag assignment, refinement, and retrieval. ACM Comput.
enhance the image representation capabilities, color features Surv. (CSUR) 49(1):14, 2016.
are also incorporated with the histogram, and the Haar wavelet transform is applied to effectively reduce computational steps and improve search speed. The Manhattan distance is used to retrieve similar images from the image dataset. The image retrieval performance of the proposed research is compared with the existing state-of-the-art in terms of precision, recall, feature extraction, and the time required to retrieve images. Semantic association is performed through Artificial Neural Networks, and an inverted index mechanism is used to return images against queries, ensuring fast retrieval. Comparison with other standard CBIR systems shows that the proposed system outperforms all existing systems in terms of average precision and recall values. In future work, we intend to evaluate the proposed research by integrating texture features with a classification-based framework for image retrieval, and by integrating salient points with a classification framework for medical image retrieval.
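As a rough illustration of the Manhattan-distance retrieval step described above, the sketch below ranks database images by L1 distance to a query feature vector. The function names, dictionary layout, and toy vectors are illustrative assumptions, not taken from the proposed system:

```python
def manhattan_distance(a, b):
    """L1 (city-block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def rank_images(query_vec, database):
    """Return (image_id, distance) pairs sorted by increasing L1 distance.

    `database` maps image ids to feature vectors of the same
    dimensionality as `query_vec`.
    """
    scores = [(img_id, manhattan_distance(query_vec, vec))
              for img_id, vec in database.items()]
    return sorted(scores, key=lambda pair: pair[1])

# Toy database of 3-dimensional feature vectors.
db = {
    "img1": [0.1, 0.2, 0.3],
    "img2": [0.9, 0.8, 0.7],
    "img3": [0.1, 0.25, 0.3],
}
ranked = rank_images([0.1, 0.2, 0.3], db)
```

The most similar images appear first in `ranked`; in a real system the vectors would come from the color and wavelet features described in the paper.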
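The inverted-index lookup mentioned above can be sketched as follows, assuming each image has already been quantized into a set of discrete feature "words" (that quantization step is omitted here, and all names are illustrative rather than from the paper):

```python
from collections import defaultdict

def build_inverted_index(image_words):
    """Map each feature word to the set of image ids containing it."""
    index = defaultdict(set)
    for img_id, words in image_words.items():
        for w in words:
            index[w].add(img_id)
    return index

def query_index(index, query_words):
    """Return candidate image ids sharing at least one word with the query."""
    candidates = set()
    for w in query_words:
        candidates |= index.get(w, set())
    return candidates

# Toy collection: three images described by feature words.
imgs = {"a": {"w1", "w2"}, "b": {"w2", "w3"}, "c": {"w4"}}
idx = build_inverted_index(imgs)
```

Only the candidate set returned by `query_index` needs distance scoring, which is what makes retrieval fast compared with a linear scan of the whole dataset.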
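The precision and recall values used for evaluation can be computed per query as below; this is a minimal sketch, not tied to any particular dataset or ground truth from the paper:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved images that are relevant;
    recall = fraction of relevant images that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy query: 4 images retrieved, 3 relevant in the ground truth.
p, r = precision_recall(["a", "b", "c", "d"], ["a", "c", "e"])
# p = 2/4 = 0.5, r = 2/3
```

Averaging these per-query values over all queries gives the average precision and recall figures used to compare CBIR systems.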
Affiliations
Rehan Ashraf1 · Mudassar Ahmed1 · Sohail Jabbar1 · Shehzad Khalid2 · Awais Ahmad3 · Sadia Din4 ·
Gwangil Jeon5

Mudassar Ahmed
[email protected]
Sohail Jabbar
[email protected]
Shehzad Khalid
shehzad [email protected]
Awais Ahmad
[email protected]
Sadia Din
[email protected]
Gwangil Jeon
[email protected]

1 Department of Computer Science, National Textile University, Faisalabad, Pakistan
2 Department of Computer Engineering, Bahria University,
Islamabad, Pakistan
3 Department of Information and Communication Engineering,
Yeungnam University, Gyeongsan, Republic of Korea
4 Department of Computer Engineering, Kyungpook National
University, Daegu, South Korea
5 Department of Embedded System Engineering, Incheon National
University, Incheon, South Korea