Article
Qaiser Abbas, Gulzar Ahmad, Tahir Alyas, Turki Alghamdi, Yazed Alsaawy and Ali Alzahrani
Special Issue
AI-IoT for New Challenges in Smart Cities
Edited by
Dr. Miguel Arevalillo-Herráez and Dr. Jaume Segura-Garcia
https://doi.org/10.3390/s23218753
sensors
Article
Revolutionizing Urban Mobility: IoT-Enhanced Autonomous
Parking Solutions with Transfer Learning for Smart Cities
Qaiser Abbas 1,2, Gulzar Ahmad 3, Tahir Alyas 4,*, Turki Alghamdi 1, Yazed Alsaawy 1 and Ali Alzahrani 1
1 Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah 42351, Saudi Arabia;
[email protected] (Q.A.); [email protected] (T.A.); [email protected] (Y.A.); [email protected] (A.A.)
2 Department of Computer Science & IT, University of Sargodha, Sargodha 40100, Pakistan
3 Department of Computer Science, University of South Asia, Lahore 54000, Pakistan;
[email protected]
4 Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
* Correspondence: [email protected]
Abstract: Smart cities have emerged as a specialized domain encompassing various technologies,
transitioning from civil engineering to technology-driven solutions. The accelerated development of
technologies, such as the Internet of Things (IoT), software-defined networks (SDN), 5G, artificial
intelligence, cognitive science, and analytics, has played a crucial role in providing solutions for
smart cities. Smart cities heavily rely on devices, ad hoc networks, and cloud computing to integrate
and streamline various activities towards common goals. However, the complexity arising from
multiple cloud service providers offering myriad services necessitates a stable and coherent platform
for sustainable operations. The Smart City Operational Platform Ecology (SCOPE) model has been
developed to address the growing demands, and incorporates machine learning, cognitive correlates,
ecosystem management, and security. SCOPE provides an ecosystem that establishes a balance
for achieving sustainability and progress. In the context of smart cities, Internet of Things (IoT)
devices play a significant role in enabling automation and data capture. This research paper focuses
on a specific module of SCOPE, which deals with data processing and learning mechanisms for
object identification in smart cities. Specifically, it presents a car parking system that utilizes smart
identification techniques to identify vacant slots. The learning controller in SCOPE employs a two-tier
approach, and utilizes two different models, namely AlexNet and YOLO, to ensure procedural
stability and improvement.

Citation: Abbas, Q.; Ahmad, G.; Alyas, T.; Alghamdi, T.; Alsaawy, Y.; Alzahrani, A. Revolutionizing Urban Mobility: IoT-Enhanced Autonomous Parking Solutions with Transfer Learning for Smart Cities. Sensors 2023, 23, 8753. https://doi.org/10.3390/s23218753

Keywords: cloud computing; IoT; smart city; performance; secure data management; modeling
Figure 1. Comparison among LPWA networks and many other connectivity technologies.
Smart cities are complex ecosystems that involve many stakeholders (e.g., managed
service providers, network operators, logistics centers) who must work together to achieve
the best services. These ecosystems consist of the interactions between an environment
and organisms. The emergence of multiple types of ecosystems can be seen in the world
of online applications and electronics in which devices interact with one another. The
ecosystems composed of multiple connected devices can do most of the data processing on
their own and do not need any human intervention. Humans can interact with these devices
to instruct the respective actors and set them up. Therefore, many smart infrastructures,
such as multi-sensor ecosystems, are installed along many roadsides for collecting data [4].
Many other methods are used to collect data by using light detection and ranging
(LiDAR), global positioning system (GPS), variable messaging signs (VMSs), and inertial
measurement units (IMUs). The live information collected by these devices is then passed
to the autonomous cloud gateway servers. One operational part of a city's ecosystem is
the smart parking system. In smart cities, the notion of automatic parking systems is
growing, and can enhance the comfort and safety of drivers. An automated parking lot
system helps drivers to park their cars quickly and safely without any issues.
Various machine learning (ML) and deep learning (DL) techniques have been used by
many researchers to develop a novel system for better results. Moreover, these consist of
machine learning models (such as support vector machines (SVM), regression trees, and
random forests), time series models (such as AutoRegressive Integrated Moving Average
(ARIMA) and AutoRegressive Moving Average (ARMA)), and ensemble techniques used
for prediction in various domains (such as bank fraud detection, spam mail detection, and
traffic congestion control for future decision-making). Artificial neural networks (ANN), which can learn
independently, perform nonlinear fitting, etc., have also been used for the said problem.
ML, DL, and ANN solve many problems in smart cities, such as smart buildings, roads,
and parking. Other than these machine learning (ML) techniques, queuing theory has
also been used to predict the wait time before parking occupancy in parking lots, or in
many other areas such as pattern recognition, speech recognition, signal processing, and
control systems [5].
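As an illustration of the queuing-theory approach mentioned above, the expected wait time before a parking slot becomes available can be estimated with the standard M/M/1 formulas. This is a hypothetical sketch, not taken from the cited work; the arrival rate `lam` and service rate `mu` below are assumed example values:

```python
# Hypothetical M/M/1 queue sketch: expected wait before a parking
# slot becomes available at a single-entry facility.
def mm1_wait_time(lam: float, mu: float) -> float:
    """Expected time in queue W_q = lam / (mu * (mu - lam)).

    lam: arrival rate (cars per hour), mu: service rate (cars per hour).
    Requires lam < mu for a stable queue.
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return lam / (mu * (mu - lam))

# Example: 12 arrivals/hour against 20 services/hour.
wait_hours = mm1_wait_time(12, 20)  # average time spent waiting, in hours
```

With these assumed rates the average wait is 0.075 h (4.5 min); real lots would need multi-server models and measured rates.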
The development of smart cities is rapidly progressing, and is driven by the advance-
ments in technology, data connectivity, and intelligent systems. One crucial aspect of this
transformation is the effective utilization of wireless connections, which enable seamless
communication and data exchange in urban environments. However, while the role of
wireless connectivity is substantial, it is just one piece of the larger puzzle that consti-
tutes a smart city’s infrastructure and functionality. A parking system in a smart city is
a technologically advanced and integrated solution designed to optimize and streamline
the management of parking spaces within urban environments. These systems leverage
various technologies and data-driven approaches to address the challenges associated
with parking, such as congestion, limited availability of parking spots, and inefficient
space utilization.
The smart city concept represents a transformative approach to urban planning and
management, which harnesses technology, data, and innovative solutions to address
the complex challenges faced by modern urban centers. It aims to create more efficient,
sustainable, and livable cities by integrating various aspects of urban life with cutting-edge
technologies.
2. Literature Review
IoT-based applications use recent developments in communication technology, arti-
ficial intelligence, sensor devices, ubiquitous computing, and wireless sensor networks
(WSN). Cloud computing combined with the Internet of Things is speeding up the de-
velopment of solutions that enable us to monitor traffic movement in smart cities. Many
solutions have been devised to find parking spaces in smart cities to improve the quality
of life. To provide comfort, smart parking systems help drivers to find available and free
parking spaces, and they also keep in mind the number of free parking spaces available and
the distance between them. Machine learning and deep learning methods have brought
advancements and innovation in monitoring the mobility of vehicles in smart cities. Paidi
et al. [6] suggested that combining computer vision and deep learning techniques can
help find free parking lots. They discussed various techniques, technologies, and the
applicability of sensors that can locate the availability of free parking space. Cai et al.’s [7]
work was based on locating and measuring the traffic flow in parking lots with a novel
vehicle filter based on deep learning techniques. The proposed system provided better
accuracy as compared to other cheap industry benchmark systems. Vu and Huang [8]
proposed a combination of spatial transform and deep contrastive network to conclude the
investigation of parking space availability. The authors demonstrated that the technique
was robust for parking displacements, distortion, variations in car sizes, effects of spatial
variations, etc.
Zhang et al. [9] introduced a self-parking system based on deep learning techniques.
In their proposed system, they marked the parking points in the image and then classified
those points as occupied or free slots. Therefore, they developed an image database of park-
ing slots that contains 12,165 images of outdoor and indoor parking slots. Chen et al. [10]
reviewed the technologies of vision-based traffic semantic understanding in Intelligent
Transportation Systems (ITSs).
Sensors 2023, 23, 8753 5 of 15
Tekouabou et al. [11] introduced a combination of ensemble techniques and IoT devices
to predict the number of free parking slots in smart cities and evaluated their system
performance with the Birmingham parking dataset. They used the bagging ensemble
technique and achieved a 94% prediction accuracy. Luo et al. [12] addressed the challenge of
endogeneity in assessing the impact of Transport Infrastructure Connectivity (TIC) on local
conflict resolution. They introduced novel evidence of TIC’s effects on conflict resolution
through a natural experiment and the application of machine learning techniques, thus
mitigating the concern of endogeneity. Orrie et al. [13] described wireless communication
for recommending the nearest parking spaces or reserving places with a GPS. Every 2 min,
the system transmits information about the availability of free spaces; any user within 2 km
of their location can reserve a place and receives a message on their smartphone with
directions. If a car is parked in every slot, no action is performed. This application requires
a Wi-Fi connection.
Karthi et al. [14] introduced a system in which they used a database and cloud to
communicate to manage parking spaces in real time. The proposed system uses ultrasonic
sensors that are placed on the ground, is connected via the internet, and has a mobile
application for users to make reservations. Tabassum et al. [15] proposed, developed, and
assessed four classifiers: multinomial Naive Bayes, decision tree, logistic regression, and
random forest. The hyperparameters of the models were tuned, and it was concluded that
the random forest outperformed the other classifiers with a 91.73% test and 100% training
accuracy. The prediction systems based on neural networks have shown the importance
of various factors, such as the day of the week, time of the day, temperature, and location.
In contrast, traffic, events, rainfall, and vacation time play a secondary role. Separately,
it has been argued that it is crucial to develop and implement educational campaigns that
target both drivers and pedestrians, and that the differences between left- and right-hand
driving and the potential risks associated with them should also be highlighted [16].
AlexNet was first proposed in 2012 by Alex Krizhevsky, and it is a simple, fundamen-
tal, and effective convolutional neural network that is mainly composed of different layers,
namely convolutional layers, pooling layers, fully connected layers, and rectified linear
unit (ReLU) layers [17]. It consists mainly of eight layers, in which five are convolutional,
and three are fully connected. The first five convolutional layers extract the input features
to generate convolved feature maps. In the pooling layer, average or max pooling oper-
ations are used on the convolved feature maps within the given neighborhood window
to aggregate the information. AlexNet is booming due to its practical strategies, such as
the dropout regularization technique and the ReLU non-linearity layer. ReLU can prevent
overfitting and significantly accelerate the training phase, and it is a half-wave rectifier
function. YOLO was introduced in 2016 by Joseph Redmon et al.; it performed well in
object detection, and could detect objects in real time at 45 frames per second. There is also
a smaller version of YOLO called Fast YOLO, which performs at 155 frames per second.
In [18], a mixed edge-based and cloud-based framework is proposed for PM2.5 value
prediction; the approach is validated by evaluating the quality of predictions using both
original and preprocessed data on a real-world dataset from air quality sensors distributed
in Calgary, Canada. YOLO-V3 uses darknet as
its backbone, and has a CNN with 53 layers; it is stacked with 53 more layers of CNNs,
making the total convolutional layers equal to 106. Traffic flow prediction methods often
depend on historical traffic data, including traffic volume and speed, but they may not be
well suited for high-capacity expressways or periods of peak traffic congestion [19].
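The ReLU half-wave rectification mentioned above is simply f(x) = max(0, x); a minimal sketch:

```python
def relu(x: float) -> float:
    # Half-wave rectification: negative inputs are clipped to zero,
    # positive inputs pass through unchanged.
    return x if x > 0.0 else 0.0
```

Because the function is identity for positive inputs, its gradient does not vanish there, which is one reason it speeds up training compared with saturating activations.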
In a related study, a driving simulator is employed to create driving scenarios and examine
the driving performance of drivers with varying levels of experience in situations where
they are faced with traffic rule violations performed by other road users. The experimental
findings reveal that certain novice drivers disregard the positioning of their vehicles when
encountering traffic violations, resulting in collisions with other road users. Furthermore,
some novice drivers can only execute either steering or braking to evade collisions in these
critical situations [20].
The obtained results in [18] show an average mean absolute percentage error improvement
of 40.18% in the prediction accuracy by using the proposed preprocessing technique. Table 1
represents the year-wise key findings of different research focuses.
Figure 2. SCOPE preprocessing and learning.
There are 12,417 labeled images in the PKLot [26] parking dataset. The images in the
dataset cover different kinds of climate conditions, such as rainy, sunny, and overcast. The
images pass through preprocessing, in terms of the initial set, noise reduction, and the
elimination of other impurities, to make a refined dataset for the tagging and solution
development periods (Figure 3). These images present distinct features because the dataset
has different parking lots.
This dataset is segmented into two classes: empty parking space class and the occu-
pied space class. The total number of images after segmentation is 695,899, of which
337,780 (48.54%) comprise empty parking space images and 358,119 (51.46%) comprise
occupied space images. Figure 4b (empty sub image) and 4c (occupied sub image) show
the segmented parking spaces.
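The class balance reported above can be checked directly from the counts given in the text:

```python
# Class balance of the segmented PKLot images (counts from the text).
empty, occupied = 337_780, 358_119
total = empty + occupied                       # 695,899 segmented images

empty_pct = round(100 * empty / total, 2)     # share of empty-slot images
occupied_pct = round(100 * occupied / total, 2)
```

The computed shares reproduce the 48.54% / 51.46% split stated in the paper, so the two classes are close to balanced.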
Figure 3. Various parking lots with different weather conditions.
4. Performance Evaluation of the System
4.1. Phase-I Using SCOPE with AlexNet
This pre-training transfer mechanism allows the CNN network's parameters to be
transferred from the natural imagery dataset to the car parking dataset. AlexNet is
pre-trained on 1000 image classes, and the last layers of AlexNet can be modified according
to our dataset. The input layer of AlexNet only accepts RGB images with a size of
227 × 227 × 3; therefore, the images will be resized according to the input layer. Each layer
in this network (e.g., convolutional layer, pooling layer) has a different filter size and has
its own stride. According to the pre-trained AlexNet, every convolutional layer ends with
a max pooling layer that will generate the greatest value based on a specified filter size.
Each convolutional layer visualizes the object features in the images, such as texture,
angle, and the edge of the target images. Figure 5 shows the training graph using a
customized AlexNet pre-trained network. This graph shows the accuracy of the training
and validation dataset using five epochs with a 0.0001 learning rate. There are 625 total
iterations and 125 iterations per epoch. Figure 6 shows the graph of the training process
of the loss and validation of the dataset using five epochs with a 0.0001 learning rate. The
number of iterations and losses are shown along the x-axis and y-axis, respectively.
Figure 5. The training process of accuracy and validation of the proposed model with transfer
learning (AlexNet). The light blue curve represents training accuracy, and its smooth training
accuracy curve is shown using the dark blue curve. Further, the black curve represents the training
validation of the proposed model.

Figure 6. The training process of loss and validation of the proposed model with transfer learning
(AlexNet). The light orange curve represents the number of losses, and its smooth loss curve is
shown using the dark orange line. Lastly, the black curve represents the loss validation of the
proposed model.

The YOLO training parameters are defined in the configuration file, which may be changed
as needed. The 6000 iterations (max_batches) are identified to predict two classes (empty
parking slots and occupied parking slots), and steps of 4800 and 5400 iterations (as per the
policy) are equal to 80% and 90% of the max_batches. The training configuration file uses a
network size of width = 416 and height = 416, which means that every image will be resized
to the network's size during training and detection.

Figure 7. The training process of the proposed model using YOLOv3.
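Collecting the training parameters mentioned in this section, the relevant portion of a Darknet configuration file would look roughly like this. It is a sketch assembled from the values given in the text, not the authors' actual file:

```ini
[net]
# Every image is resized to the network size during training and detection.
width=416
height=416
learning_rate=0.001
max_batches=6000
policy=steps
# 80% and 90% of max_batches, as per the step decay policy.
steps=4800,5400

[yolo]
# Two classes: empty parking slot and occupied parking slot.
classes=2
```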
The learning rate (learning_rate = 0.001) is a hyperparameter that adjusts and controls
the weights of the network. The learning rate needs to be high at the beginning of the
training process; once the initial value is set and training starts, the learning rate gradually
decreases over time, enabling the model to converge. The learning rate decay policy is
specified in the configuration file. The blue curve in Figure 7 shows the training loss, and
the red curve shows the mean average precision (mAP), which is 99.9%.

In this research, the performance of the proposed model is measured using accuracy,
false negative rate (FNR), true positive rate (TPR), true negative rate (TNR), positive
predictive value (PPV), negative predictive value (NPV), false positive rate (FPR), false
discovery rate (FDR), and F1-Score.

The performance of the proposed model is evaluated based on the counts of validation
records correctly and incorrectly predicted by the proposed trained model. The accuracy
of the proposed model provides the information about how many images are correctly
classified in the confusion matrix by using the trained proposed model, as shown in
Equation (1).

Accuracy = (τρ + τν)/N (1)

The error rate or miss rate or false negative rate (FNR) of the proposed model is
calculated using Equation (2), and provides the information about how many images are
incorrectly identified in the confusion matrix.

Miss rate = (Fρ + Fν)/N (2)

The other metric of the measure of performance is sensitivity or recall or the true
positive rate (TPR), and is calculated with Equation (3).

Sensitivity = τρ/(τρ + Fν) (3)
One more performance measure metric which is used is specificity or the true negative
rate (TNR), and it is measured with Equation (4).
Specificity = τν/(τν + Fρ) (4)
The precision or positive predictive value of the proposed model is measured with
Equation (5).
Precision = τρ/(τρ + Fρ) (5)
Equation (6) is used to find out the negative predictive value (NPV) of the proposed
model.
NPV = τν/(τν + Fν) (6)
The false positive rate (FPR) or fallout of the proposed model is measured with
Equation (7).
FPR = Fρ/(τν + Fρ) (7)
Equation (8) represents the false discovery rate (FDR) of the proposed model.
FDR = Fρ/(Fρ + τρ) (8)
F1-Score is an important metric used to evaluate the proposed model. It is based on
precision and recall, and is calculated by taking the harmonic mean of recall and precision,
as shown in Equation (9).
F1-Score = 2 × TPR × PPV/(TPR + PPV) (9)
Table 2 shows the confusion matrix from the training phase of the proposed model
used for automated parking lot detection prediction. These metrics are applied to the
dataset, which is divided into 80% for training and 20% for validation; the training portion
is used for building the proposed model, and the validation portion is used for measuring
its accuracy. The results of the training and validation dataset in the form of a confusion matrix
are shown in Tables 2–4. There are 20,000 randomly selected images from the parking lot
dataset which are used for transfer learning with the use of a customized AlexNet network,
in which 16,000 (80%) images are used for training and 4000 (20%) images are used to
validate the training model, as shown in the respective tables.
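The split and iteration counts described in this section are mutually consistent; the arithmetic below also implies a mini-batch size of 128, which is an inference from the reported figures (16,000 training images / 125 iterations per epoch), not a value stated in the paper:

```python
# Dataset split and iteration arithmetic from the text.
total_images = 20_000
train = int(total_images * 0.80)        # 16,000 training images
val = total_images - train              # 4,000 validation images

epochs = 5
iters_per_epoch = 125
total_iters = epochs * iters_per_epoch  # 625 iterations in total

# Implied mini-batch size (an inference, not stated in the paper).
batch_size = train // iters_per_epoch
```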
Table 2. Confusion matrix of the training of the proposed model during the prediction of automated
parking lot detection.
Table 3. Confusion matrix of the validation of the proposed model using AlexNet during prediction
of automated parking lot detection.
Table 4. Confusion matrix of the validation of the proposed model using YOLO during the prediction
of automated parking lot detection.
The proposed model (transfer learning with AlexNet) is evaluated using training and
validation data, employing various statistical measures for performance assessment, as
shown in Table 5.
Table 5. Performance evaluation of proposed model (transfer learning with AlexNet) using training
and validation data with different statistical measures.
The proposed model (transfer learning with YOLO) is evaluated using training and
validation data, employing various statistical measures for performance assessment, as
shown in Table 6.
Table 6. Performance evaluation of proposed model (transfer learning with YOLO) using validation
data with different statistical measures.
The description of the confusion matrix is shown in Table 2, and the results are shown
below.
• True positive τρ = 7991; the model accurately classified 7991 images in the empty lot
class out of 8000 images.
• True negative τν = 7988; the model accurately classified 7988 images in the occupied
lot class out of 8000 images.
• False positive Fρ = 12; consequently, the model mistakenly identified 12 images of
the occupied lot class as the empty lot class.
• False negative (Fν ) = 9; consequently, the model mistakenly identified 9 images of the
empty lot class as the occupied lot class.
A total of 7991 and 7988 images were correctly identified as the empty lot and occupied
lot classes, respectively.
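Using the counts listed above, the metrics defined in Equations (1)–(9) can be computed directly; a minimal sketch (empty lot taken as the positive class):

```python
# Confusion-matrix counts from the training phase (empty lot = positive class).
tp, tn, fp, fn = 7991, 7988, 12, 9
n = tp + tn + fp + fn                  # 16,000 records in total

accuracy = (tp + tn) / n               # Equation (1)
miss_rate = (fp + fn) / n              # Equation (2), error rate
sensitivity = tp / (tp + fn)           # Equation (3), TPR / recall
specificity = tn / (tn + fp)           # Equation (4), TNR
precision = tp / (tp + fp)             # Equation (5), PPV
npv = tn / (tn + fn)                   # Equation (6)
fpr = fp / (tn + fp)                   # Equation (7), fallout
fdr = fp / (fp + tp)                   # Equation (8)
f1 = 2 * sensitivity * precision / (sensitivity + precision)  # Equation (9)
```

With these counts the accuracy and F1-Score both round to 99.87%, matching the near-perfect separation of the two classes reported in the tables.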
Tables 6 and 7 represent the proposed model’s measurements by using Equations (1)–(9).
The accuracy, FNR, TPR, TNR, PPV, NPV, FPR, FDR, and F1-Score metrics of training and
validation dataset of the proposed model are shown in Table 6. More broadly, the rapid
advancement of intelligent connected technologies and cellular vehicle-to-everything
communication (C-V2X) presents new opportunities for addressing the challenges of
connected automated vehicles (CAVs) at continuous signalized intersections, especially in
the context of eco-driving [28].
Table 7. The performance comparison of the proposed model with approaches in the literature.
Author Contributions: Conceptualization, T.A. (Turki Alghamdi) and Y.A.; Methodology, Q.A.;
Formal analysis, G.A.; Writing—original draft, T.A. (Tahir Alyas); Visualization, A.A. All authors
have read and agreed to the published version of the manuscript.
Funding: This research is supported by the Deanship of Scientific Research, Islamic University of
Madinah, KSA.
Institutional Review Board Statement: Not Applicable.
Informed Consent Statement: Not Applicable.
Data Availability Statement: The data used in this paper are available from the corresponding
author upon request.
Acknowledgments: The authors jointly collected the datasets and produced the results reported in this study.
Conflicts of Interest: The authors declare no conflict of interest regarding the publication of
this work.
References
1. Fang, Y.; Min, H.; Wu, X.; Wang, W.; Zhao, X.; Mao, G. On-Ramp Merging Strategies of Connected and Automated Vehicles
Considering Communication Delay. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15298–15312. [CrossRef]
2. Belissent, J. Getting clever about smart cities: New opportunities require new business models. Camb. Mass. 2010, 193, 244–277.
3. Draz, U.; Ali, T.; Khan, J.A.; Majid, M.; Yasin, S. A real-time smart dumpsters monitoring and garbage collection system. In
Proceedings of the 2017 Fifth International Conference on Aerospace Science & Engineering (ICASE), Islamabad, Pakistan, 14–16
November 2017; IEEE: Piscataway, NJ, USA, 2017.
4. Yue, W.; Li, C.; Wang, S.; Xue, N.; Wu, J. Cooperative Incident Management in Mixed Traffic of CAVs and Human-Driven Vehicles.
IEEE Trans. Intell. Transp. Syst. 2023. [CrossRef]
5. Safi, Q.G.K.; Luo, S.; Pan, L.; Liu, W.; Hussain, R.; Bouk, S.H. SVPS: Cloud-based smart vehicle parking system over ubiquitous
VANETs. Comput. Netw. 2018, 138, 18–30. [CrossRef]
6. Paidi, V.; Fleyeh, H.; Håkansson, J.; Nyberg, R.G. Smart parking sensors, technologies and applications for open parking lots: A
review. IET Intell. Transp. Syst. 2018, 12, 735–741. [CrossRef]
7. Cai, B.Y.; Alvarez, R.; Sit, M.; Duarte, F.; Ratti, C. Deep Learning-Based Video System for Accurate and Real-Time Parking
Measurement. IEEE Internet Things J. 2019, 6, 7693–7701. [CrossRef]
8. Vu, H.T.; Huang, C.-C. Parking space status inference upon a deep CNN and multi-task contrastive network with spatial
transform. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 1194–1208. [CrossRef]
9. Zhang, L.; Huang, J.; Li, X.; Xiong, L. Vision-based parking-slot detection: A DCNN-based approach and a large-scale benchmark
dataset. IEEE Trans. Image Process. 2018, 27, 5350–5364. [CrossRef]
10. Chen, J.; Wang, Q.; Cheng, H.H.; Peng, W.; Xu, W. A Review of Vision-Based Traffic Semantic Understanding in ITSs. IEEE Trans.
Intell. Transp. Syst. 2022, 23, 19954–19979. [CrossRef]
11. Tekouabou, S.C.K.; Cherif, W.; Silkan, H. Improving parking availability prediction in smart cities with IoT and ensemble-based
model. J. King Saud Univ. Comput. Inf. Sci. 2020, 34, 687–697.
12. Luo, J.; Wang, G.; Li, G.; Pesce, G. Transport infrastructure connectivity and conflict resolution: A machine learning analysis.
Neural Comput. Appl. 2022, 34, 6585–6601. [CrossRef]
13. Orrie, O.; Silva, B.; Hancke, G.P. A Wireless Smart Parking System. In Proceedings of the 41st Annual Conference of the IEEE
Industrial Electronics Society (IECON), Yokohama, Japan, 9–12 November 2015.
14. Karthi, M.; Preethi, H. Smart Parking with Reservation in Cloud based environment. In Proceedings of the 2016 IEEE International
Conference on Cloud Computing in Emerging Markets, Bangalore, India, 19–21 October 2016; pp. 164–167.
15. Tabassum, N.; Namoun, A.; Alyas, T.; Tufail, A.; Taqi, M.; Kim, K.-H. Classification of Bugs in Cloud Computing Applications
Using Machine Learning Techniques. Appl. Sci. 2023, 13, 2880. [CrossRef]
16. Xu, J.; Guo, K.; Zhang, X.; Sun, P.Z.H. Left Gaze Bias between LHT and RHT: A Recommendation Strategy to Mitigate Human
Errors in Left- and Right-Hand Driving. IEEE Trans. Intell. Veh. 2023. [CrossRef]
17. Arora, D.; Garg, M.; Gupta, M. Diving deep in Deep Convolutional Neural Network. In Proceedings of the 2020 2nd International
Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 18–19
December 2020; pp. 749–751.
18. Ojagh, S.; Cauteruccio, F.; Terracina, G.; Liang, S.H. Enhanced air quality prediction by edge-based spatiotemporal data
preprocessing. Comput. Electr. Eng. 2021, 96 Pt B, 107572. [CrossRef]
19. Chen, J.; Xu, M.; Xu, W.; Li, D.; Peng, W.; Xu, H. A Flow Feedback Traffic Prediction Based on Visual Quantified Features. IEEE
Trans. Intell. Transp. Syst. 2023, 24, 10067–10075. [CrossRef]
20. Xu, J.; Guo, K.; Sun, P.Z.H. Driving Performance Under Violations of Traffic Rules: Novice Vs. Experienced Drivers. IEEE Trans.
Intell. Veh. 2022, 7, 908–917. [CrossRef]
21. Assim, M.; Al-Omary, A. A survey of IoT-based smart parking systems in smart cities. In Proceedings of the 3rd Smart Cities
Symposium (SCS 2020), Online Conference, 21–23 September 2020; pp. 35–38. [CrossRef]
22. Takehara, R.; Gonsalves, T. Autonomous Car Parking System using Deep Reinforcement Learning. In Proceedings of the 2021 2nd
International Conference on Innovative and Creative Information Technology (ICITech), Salatiga, Indonesia, 23–25 September
2021; pp. 85–89. [CrossRef]
23. Sajna, S.; Nair, R.R. Learning-Based Smart Parking System. In Proceedings of the International Conference on Computational Intelligence.
Algorithms for Intelligent Systems; Tiwari, R., Pavone, M.F., Ravindranathan Nair, R., Eds.; Springer: Singapore, 2023. [CrossRef]
24. Iqbal, K.; Abbas, S.; Khan, M.A.; Ather, A.; Khan, M.S.; Fatima, A.; Ahmad, G. Autonomous Parking-Lots Detection with
Multi-Sensor Data Fusion Using Machine Deep Learning Techniques. CMC-Comput. Mater. Contin. 2021, 66, 1595–1612.
[CrossRef]
25. Chen, Z.; Wang, X.; Zhang, W.; Yao, G.; Li, D.; Zeng, L. Autonomous Parking Space Detection for Electric Vehicles Based on
Improved YOLOV5-OBB Algorithm. World Electr. Veh. J. 2023, 14, 276. [CrossRef]
26. Almeida, P.; Oliveira, L.S.; Silva, E., Jr.; Britto, A., Jr.; Koerich, A. PKLot—A robust dataset for parking lot classification. Expert Syst.
Appl. 2015, 42, 4937–4949. Available online: https://www.kaggle.com/datasets/ammarnassanalhajali/pklot-dataset (accessed
on 1 October 2023). [CrossRef]
27. Ma, X.; Dong, Z.; Quan, W.; Dong, Y.; Tan, Y. Real-time assessment of asphalt pavement moduli and traffic loads using monitoring
data from Built-in Sensors: Optimal sensor placement and identification algorithm. Mech. Syst. Signal Process. 2023, 187, 109930.
[CrossRef]
28. Zhang, X.; Fang, S.; Shen, Y.; Yuan, X.; Lu, Z. Hierarchical Velocity Optimization for Connected Automated Vehicles With Cellular
Vehicle-to-Everything Communication at Continuous Signalized Intersections. IEEE Trans. Intell. Transp. Syst. 2023. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.