
Special Issue: AI-IoT for New Challenges in Smart Cities
Edited by Dr. Miguel Arevalillo-Herráez and Dr. Jaume Segura-Garcia
https://doi.org/10.3390/s23218753
sensors
Article
Revolutionizing Urban Mobility: IoT-Enhanced Autonomous
Parking Solutions with Transfer Learning for Smart Cities
Qaiser Abbas 1,2, Gulzar Ahmad 3, Tahir Alyas 4,*, Turki Alghamdi 1, Yazed Alsaawy 1 and Ali Alzahrani 1

1 Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah 42351, Saudi Arabia;
[email protected] (Q.A.); [email protected] (T.A.); [email protected] (Y.A.); [email protected] (A.A.)
2 Department of Computer Science & IT, University of Sargodha, Sargodha 40100, Pakistan
3 Department of Computer Science, University of South Asia, Lahore 54000, Pakistan;
[email protected]
4 Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
* Correspondence: [email protected]

Abstract: Smart cities have emerged as a specialized domain encompassing various technologies, transitioning from civil engineering to technology-driven solutions. The accelerated development of technologies, such as the Internet of Things (IoT), software-defined networks (SDN), 5G, artificial intelligence, cognitive science, and analytics, has played a crucial role in providing solutions for smart cities. Smart cities heavily rely on devices, ad hoc networks, and cloud computing to integrate and streamline various activities towards common goals. However, the complexity arising from multiple cloud service providers offering myriad services necessitates a stable and coherent platform for sustainable operations. The Smart City Operational Platform Ecology (SCOPE) model has been developed to address the growing demands, and incorporates machine learning, cognitive correlates, ecosystem management, and security. SCOPE provides an ecosystem that establishes a balance for achieving sustainability and progress. In the context of smart cities, Internet of Things (IoT) devices play a significant role in enabling automation and data capture. This research paper focuses on a specific module of SCOPE, which deals with data processing and learning mechanisms for object identification in smart cities. Specifically, it presents a car parking system that utilizes smart identification techniques to identify vacant slots. The learning controller in SCOPE employs a two-tier approach, and utilizes two different models, namely AlexNet and YOLO, to ensure procedural stability and improvement.

Keywords: cloud computing; IoT; smart city; performance; secure data management; modeling

Citation: Abbas, Q.; Ahmad, G.; Alyas, T.; Alghamdi, T.; Alsaawy, Y.; Alzahrani, A. Revolutionizing Urban Mobility: IoT-Enhanced Autonomous Parking Solutions with Transfer Learning for Smart Cities. Sensors 2023, 23, 8753. https://doi.org/10.3390/s23218753

Academic Editors: Miguel Arevalillo-Herráez and Jaume Segura-Garcia
Received: 14 September 2023; Revised: 14 October 2023; Accepted: 23 October 2023; Published: 27 October 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
The concept of a smart city has emerged with multiple new challenges and opportunities in IT governance, development, security, and emerging technologies. Digital systems for smart cities are becoming a new breed of software with real-time updates, connectivity, and functionality. It is, therefore, highly desirable to formulate frameworks and models that support the concept and functionality of a smart city [1].
A smart city is a well-defined and new concept that many institutions and researchers focus on nowadays. Inappropriate parking of vehicles at parking spots may lead to a deadlock situation for the rest of the vehicles, and handling such situations is a worldwide issue. The problem is to locate an appropriate parking spot in minimal time, using fewer resources and without wasting time, while ensuring safe and secure parking. By using the Internet of Things, artificial intelligence, and other communication devices, this problem can be resolved [2].
The Internet of Things (IoT) and artificial intelligence are the major research areas used to solve challenges related to transportation and other smart city problems. IoT refers to the services of interconnected devices, people, networks, and other valuable things that are provided with radio frequency identification (RFID). Publishing data onto the cloud requires no human-to-human interaction. These IoT-enabled devices use multiple communication, networking, and data-linking protocols.
The rapid development of the Internet of Things (IoT) enables ubiquitous connectivity among various machines through wireless communication, significantly impacting people's daily life in many domains, such as smart cities, smart homes, garbage monitoring, smart parking, smart transportation, etc. The surrounding information is sensed by IoT devices and shared with people for efficient services and among themselves. In smart parking, short-term wireless networks are used. Multiple techniques, such as Bluetooth, ZigBee, Wi-Fi, long-term evolution (LTE), etc., achieve efficient communication [3]. These methods can provide reliable communications and high-speed data transmissions between IoT devices. Low Power Wide Area (LPWA) networks, which employ a novel wireless protocol, are also very popular in the field of transmission in the current era due to their long-range communication at low power. They offer high energy efficiency, low power consumption, and high coverage capabilities. Figure 1 compares LPWA networks and many other connectivity technologies regarding power consumption, bandwidth, cost, and range.

Figure 1. Comparison among LPWA networks and many other connectivity technologies.

Smart cities are complex ecosystems that involve many stakeholders (e.g., managed
service providers, network operators, logistic centers), and must work together to achieve
the best services. These ecosystems consist of the interactions between an environment
and organisms. The emergence of multiple types of ecosystems can be seen in the world
of online applications and electronics in which devices interact with one another. The
ecosystems composed of multiple connected devices can do most of the data processing on
their own and do not need any human intervention. Humans can interact with these devices
to instruct the respective actor and set them up. Therefore, many smart infrastructures such
as multi-sensor ecosystems are installed for collecting data on many roadsides [4].
Many other methods are used to collect data by using light detection and ranging
(LiDAR), global positioning system (GPS), variable messaging signs (VMSs), and inertial
measurement units (IMUs). The live information collected by these devices is then passed

to the autonomous cloud gateway servers. One operational part of a city's ecosystem is the smart parking system. In smart cities, the notion of automatic parking systems is growing, and can enhance the comfort and safety of drivers. An automated parking lot system helps drivers to park their cars quickly and safely without any issues.
Various machine learning (ML) and deep learning (DL) techniques have been used by many researchers to develop novel systems for better results. These consist of machine learning models (such as support vector machines (SVM), regression trees, and random forests), time series models (such as the AutoRegressive Integrated Moving Average (ARIMA) and AutoRegressive Moving Average (ARMA)), and ensemble techniques used for prediction in various domains (such as bank fraud detection, spam mail detection, and decision making for traffic congestion control). Artificial neural networks (ANN), which can learn independently, perform nonlinear fitting, etc., have also been used for the said problem. ML, DL, and ANN solve many problems in smart cities, such as smart buildings, roads, and parking. Other than these machine learning (ML) techniques, queuing theory has also been used to predict the wait time before parking occupancy in parking lots, as well as in many other areas such as pattern recognition, speech recognition, signal processing, and control systems [5].
The development of smart cities is rapidly progressing, and is driven by the advance-
ments in technology, data connectivity, and intelligent systems. One crucial aspect of this
transformation is the effective utilization of wireless connections, which enable seamless
communication and data exchange in urban environments. However, while the role of
wireless connectivity is substantial, it is just one piece of the larger puzzle that consti-
tutes a smart city’s infrastructure and functionality. A parking system in a smart city is
a technologically advanced and integrated solution designed to optimize and streamline
the management of parking spaces within urban environments. These systems leverage
various technologies and data-driven approaches to address the challenges associated
with parking, such as congestion, limited availability of parking spots, and inefficient
space utilization.
The smart city concept represents a transformative approach to urban planning and
management, which harnesses technology, data, and innovative solutions to address
the complex challenges faced by modern urban centers. It aims to create more efficient,
sustainable, and livable cities by integrating various aspects of urban life with cutting-edge
technologies.

1.1. Problem Statement


In the context of smart cities, urban congestion and limited parking spaces have
become pressing challenges. Traditional parking management systems often fall short in ef-
ficiently utilizing available parking spaces and providing a seamless experience for drivers.
To address this issue, this research focuses on developing an IoT-based autonomous park-
ing scenario that leverages transfer learning techniques. The primary problem addressed
within this research paper is the need for a more efficient and seamless parking system
that not only optimizes parking space usage but also contributes to reducing urban traffic
congestion and enhancing the quality of urban life.

1.2. Research Motivations


This research has formulated the following research motivations:
a Rapid urbanization has led to a surge in the number of vehicles on the road, resulting
in chronic traffic congestion in many cities.
b The advancement of Internet of Things (IoT) technology presents an opportunity to
revolutionize urban transportation and parking management.
c The potential for machine learning and transfer learning techniques to adapt and
optimize autonomous parking systems across different smart city environments is a
compelling avenue for exploration.

1.3. Significance of Our Study


This research aims to develop an innovative and efficient IoT-based autonomous
parking system that enhances parking space utilization, reduces traffic congestion, and
promotes the sustainable development of smart cities. This study’s significance lies in its
potential to transform the way cities manage parking and urban mobility, ultimately leading
to more sustainable, efficient, and user-centric smart cities. Alleviating traffic congestion
is a critical concern in modern urban planning and transportation management. It refers
to the efforts and strategies aimed at reducing or mitigating the congestion of vehicles
on road networks during peak hours. Traffic congestion can lead to numerous negative
consequences, including increased travel time, environmental pollution, fuel consumption,
and stress for commuters. The outcomes of our research have the potential to benefit not
only the residents and visitors of smart cities but also the global urban community facing
similar challenges.

1.4. Research Objectives


The following objectives are defined for this research:
a Develop a robust module for an IoT-based autonomous parking system, dedicated
to real-time data collection, analysis, and decision making. By leveraging advanced
sensors and analytics, it will enable an automated detection of vacant and occupied
parking spaces, improving the user experience and reducing the time spent searching
for parking.
b Explore and apply transfer learning techniques to adapt the autonomous parking system
to different smart city environments, promoting scalability and ease of deployment.
c Conduct extensive testing and evaluation of the developed system in real-world smart
city environments to assess its effectiveness in optimizing parking space utilization
and reducing traffic congestion.

2. Literature Review
IoT-based applications use recent developments in communication technology, arti-
ficial intelligence, sensor devices, ubiquitous computing, and wireless sensor networks
(WSN). Cloud computing combined with the Internet of Things is speeding up the de-
velopment of solutions that enable us to monitor traffic movement in smart cities. Many
solutions have been devised to find parking spaces in smart cities to improve the quality
of life. To provide comfort, smart parking systems help drivers to find available and free
parking spaces, and they also keep in mind the number of free parking spaces available and
the distance between them. Machine learning and deep learning methods have brought
advancements and innovation in monitoring the mobility of vehicles in smart cities. Paidi
et al. [6] suggested that combining computer vision and deep learning techniques can
help find free parking lots. They discussed various techniques, technologies, and the
applicability of sensors that can locate the availability of free parking space. Cai et al.’s [7]
work was based on locating and measuring the traffic flow in parking lots with a novel
vehicle filter based on deep learning techniques. The proposed system provided better
accuracy as compared to other cheap industry benchmark systems. Vu and Huang [8]
proposed a combination of spatial transform and deep contrastive network to conclude the
investigation of parking space availability. The authors demonstrated that the technique
was robust for parking displacements, distortion, variations in car sizes, effects of spatial
variations, etc.
Zhang et al. [9] introduced a self-parking system based on deep learning techniques.
In their proposed system, they marked the parking points in the image and then classified
those points as occupied or free slots. Therefore, they developed an image database of park-
ing slots that contains 12,165 images of outdoor and indoor parking slots. Chen et al. [10]
reviewed the technologies of vision-based traffic semantic understanding in Intelligent
Transportation Systems (ITSs).

Tekouabou et al. [11] introduced a combination of ensemble techniques and IoT devices
to predict the number of free parking slots in smart cities and evaluated their system
performance with the Birmingham parking dataset. They used the bagging ensemble
technique and achieved a 94% prediction accuracy. Luo et al. [12] addressed the challenge of
endogeneity in assessing the impact of Transport Infrastructure Connectivity (TIC) on local
conflict resolution. They introduced novel evidence of TIC’s effects on conflict resolution
through a natural experiment and the application of machine learning techniques, thus
mitigating the concern of endogeneity. Orrie et al. [13] described wireless communication
for the recommendations of the nearest parking spaces or reserve places with a GPS. After
every 2 min, the system transmits information about the availability of free spaces. If no
parking spaces are available, no actions are taken; on the other hand, within 2 km of their
location, any user can reserve a place. The user receives a message on their smartphones
with directions. If a car is parked in every slot, no action is performed; this application
requires a Wi-Fi connection.
Karthi et al. [14] introduced a system in which they used a database and cloud to
communicate to manage parking spaces in real time. The proposed system uses ultrasonic
sensors that are placed on the ground, is connected via the internet, and has a mobile
application for users to make reservations. Tabassum et al. [15] proposed, developed, and
assessed four classifiers: multinomial Naive Bayes, decision tree, logistic regression, and
random forest. The hyperparameters of the models were tuned, and it was concluded that
the random forest outperformed the other classifiers with a 91.73% test and 100% training
accuracy. The prediction systems based on neural networks have shown the importance
of various factors, such as the day of the week, time of the day, temperature, and location.
In contrast, traffic, events, rainfall, and vacation time play a secondary role. Therefore,
it is crucial to develop and implement educational campaigns that target both drivers
and pedestrians. Moreover, the differences between left- and right-hand driving and the
potential risks associated with this should also be highlighted [16].
AlexNet was first proposed in 2012 by Alex Krizhevsky, and it is a simple, fundamen-
tal, and effective convolutional neural network that is mainly composed of different layers,
namely convolutional layers, pooling layers, fully connected layers, and rectified linear
unit (ReLU) layers [17]. It consists mainly of eight layers, in which five are convolutional,
and three are fully connected. The first five convolutional layers extract the input features
to generate convolved feature maps. In the pooling layer, average or max pooling oper-
ations are used on the convolved feature maps within the given neighborhood window
to aggregate the information. AlexNet is booming due to its practical strategies, such as
the dropout regularization technique and the ReLU non-linearity layer. ReLU can prevent
overfitting and significantly accelerate the training phase, and it is a half-wave rectifier
function. YOLO was introduced in 2016 by Joseph Redmon et al.; it performed well in
object detection, and could detect objects in real time at 45 frames per second. There is also
a smaller version of YOLO called Fast YOLO, which performs at 155 frames per second. In
the study reported in [18], a mixed edge-based and cloud-based framework was proposed with the final
goal of PM2.5 value prediction; to validate that approach, the quality of predictions was evaluated
using both original and preprocessed data on a real-world dataset
from air quality sensors distributed in Calgary, Canada [18]. YOLO-V3 uses darknet as
its backbone, and has a CNN with 53 layers; it is stacked with 53 more layers of CNNs,
making the total convolutional layers equal to 106. Traffic flow prediction methods often
depend on historical traffic data, including traffic volume and speed, but they may not be
well suited for high-capacity expressways or periods of peak traffic congestion [19].
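As a side note for reproducibility, the AlexNet layer composition summarized above (five convolutional layers followed by three fully connected layers, with ReLU activations and dropout) can be checked against the reference implementation shipped with torchvision. The short snippet below is an illustration added for clarity and assumes torchvision is installed; it is not part of the original study.

```python
# Inspect the canonical AlexNet layout (5 convolutional + 3 fully connected layers).
# Requires: pip install torch torchvision
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(weights=None)  # architecture only; no pretrained weights needed here

conv_layers = [m for m in alexnet.features if isinstance(m, nn.Conv2d)]
fc_layers = [m for m in alexnet.classifier if isinstance(m, nn.Linear)]
relu_count = sum(isinstance(m, nn.ReLU) for m in alexnet.modules())
dropout_count = sum(isinstance(m, nn.Dropout) for m in alexnet.modules())

print(f"convolutional layers: {len(conv_layers)}")   # -> 5
print(f"fully connected layers: {len(fc_layers)}")   # -> 3
print(f"ReLU layers: {relu_count}, dropout layers: {dropout_count}")
```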
In the study of [20], a driving simulator is employed to create driving scenarios and examine
the driving performance of drivers with varying levels of experience in situations where
they are faced with traffic rule violations performed by other road users. The experimental
findings reveal that certain novice drivers disregard the positioning of their vehicles when
encountering traffic violations, resulting in collisions with other road users. Furthermore,

some novice drivers can only execute either steering or braking to evade collisions in these
critical situations [20].
The results reported in [18] show an average mean absolute percentage error improvement of
40.18% in prediction accuracy when the proposed preprocessing technique is used. Table 1
represents the year-wise key findings of different research focuses.

Table 1. Year-wise different research findings.

Paper Title | Year | Research Focus | Key Finding
A survey of IoT-based smart parking systems in smart cities [21] | 2019 | IoT-based smart parking systems | Provides a comprehensive overview of IoT-based parking systems, their components, and challenges
Deep reinforcement learning for autonomous parking [22] | 2020 | Autonomous parking with deep reinforcement learning | Discusses a deep reinforcement learning approach for autonomous parking
Learning-based smart parking system [23] | 2021 | Intelligent detection of free parking slots | Discusses convolution neural networks
Autonomous detection of parking lots with multi-sensor data fusion using machine learning techniques [24] | 2021 | Deep convolutional neural network deep F-MTCNN | Provides vision-based target detection and object classification
Autonomous parking space detection for electric vehicles based on the improved YOLOV5-OBB algorithm [25] | 2023 | Receptive field block | Discusses parking space detection and coordinate attention mechanism

3. Solution Design and Implementation


Conceptual Description of the Solution
Smart City Operation Platform Ecology (SCOPE) is a model which focuses on the
provisioning of smart services and functions as a management system (Figure 2). It takes a
smart city as an ecosystem with various inhabitants having specific needs and demands.
One important component of this model is the controller that provides the learning algo-
rithm and starts the system in terms of initialization. In this research, we are using the same
module to identify parking. Three main modules used in this system include the initializa-
tion module, learning controller, and synthesizer. The initialization module serves as the
starting point of the system. It is responsible for preparing and processing the raw input
data. This module may involve tasks such as data preprocessing, which could include the
cleaning, normalization, and transformation of the data. The initialization module provides
preprocessing and tagging of datasets based on the properties required to formulate initial,
refined, and output datasets. The learning controller is a crucial part of the system, as it
manages the learning and decision-making processes. It likely includes machine learning
algorithms or other AI techniques to analyze and learn from the processed data which
it receives. The synthesizer is a separate component that plays a role in generating or
synthesizing outputs or results. It might take the refined data from the learning controller
and create meaningful outputs or solutions related to car parking.
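To make this division of responsibilities concrete, the following minimal Python sketch mirrors the three modules described above. SCOPE's source code is not published with this paper, so the class names, method signatures, and the injected model callable are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the SCOPE data flow described in the text.
# All names below are hypothetical placeholders, not the authors' implementation.

class InitializationModule:
    """Prepares the raw input: cleaning/normalization and tagging of the dataset."""

    def run(self, raw_images):
        refined = [self._clean(img) for img in raw_images]   # noise reduction, resizing, ...
        return [(img, self._tag(img)) for img in refined]    # refined + tagged dataset

    def _clean(self, img):
        return img                                           # placeholder preprocessing step

    def _tag(self, img):
        return {"source": "parking-camera"}                  # placeholder property tagging


class LearningController:
    """Wraps the learning model (e.g., a fine-tuned AlexNet or a YOLO detector)."""

    def __init__(self, model):
        self.model = model                                   # any callable: image -> label

    def predict(self, tagged_data):
        return [self.model(img) for img, _tags in tagged_data]


class Synthesizer:
    """Turns raw predictions into an actionable output, e.g., indices of vacant slots."""

    def synthesize(self, predictions):
        return [i for i, label in enumerate(predictions) if label == "empty"]


def scope_parking_pipeline(raw_images, model):
    tagged = InitializationModule().run(raw_images)
    predictions = LearningController(model).predict(tagged)
    return Synthesizer().synthesize(predictions)


if __name__ == "__main__":
    # Dummy usage: a trivial "model" that labels every other slot as empty.
    fake_model = lambda img: "empty" if img % 2 == 0 else "occupied"
    print(scope_parking_pipeline(raw_images=list(range(6)), model=fake_model))  # [0, 2, 4]
```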

Figure 2. SCOPE preprocessing and learning.

There are 12,417 labeled images in the PKLot [26] parking dataset. The images in the dataset cover different kinds of climate conditions, such as rainy, sunny, and overcast. Preprocessing, in terms of the initial set, noise reduction, and the elimination of other impurities, is applied to make a refined dataset for the tagging and solution development periods (Figure 3). These images present distinct features because the dataset has different parking lots.
This dataset is segmented into two classes: empty parking space class and the occu-
pied space class. The total number of images after segmentation is 695,899, of which
337,780 (48.54%) comprise empty parking space images and 358,119 (51.46%) comprise
occupied space images. Figure 4b (empty sub image) and 4c (occupied sub image) show
the segmented parking spaces.
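Assuming the segmented PKLot patches are organized into one folder per class (empty and occupied), the 80%/20% training/validation split used later in this paper can be reproduced along the lines of the sketch below; the folder names and file paths are assumptions made for illustration and are not part of the released dataset.

```python
# Hedged sketch: build an 80/20 train/validation split over the two segmented
# classes ("empty", "occupied"). Folder layout and paths are assumed for illustration.
import random
from pathlib import Path

DATASET_ROOT = Path("pklot_segmented")   # assumed layout: pklot_segmented/empty, pklot_segmented/occupied
CLASSES = ["empty", "occupied"]

def split_dataset(root: Path, train_fraction: float = 0.8, seed: int = 42):
    rng = random.Random(seed)
    split = {"train": [], "val": []}
    for label in CLASSES:
        files = sorted((root / label).glob("*.jpg"))
        rng.shuffle(files)
        cut = int(len(files) * train_fraction)
        split["train"] += [(f, label) for f in files[:cut]]
        split["val"] += [(f, label) for f in files[cut:]]
    return split

if __name__ == "__main__":
    split = split_dataset(DATASET_ROOT)
    print(len(split["train"]), "training samples,", len(split["val"]), "validation samples")
```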

Figure 3. Various parking lots with different weather conditions (sunny, cloudy, and rainy).

Figure 4. (a) Parking lot image, (b) empty sub image, and (c) occupied sub image.

4. Performance Evaluation of the System
4.1. Phase-I Using SCOPE with AlexNet

This pre-training transfer mechanism allows the CNN network's parameters to be transferred from the natural imagery dataset to the car parking dataset. AlexNet is pre-trained on 1000 image classes, and the last layers of AlexNet can be modified according to our dataset. The input layer of AlexNet only accepts RGB images with a size of 227 × 227 × 3; therefore, the images will be resized according to the input layer. Each layer in this network (e.g., convolutional layer, pooling layer) has a different filter size and has its own stride. According to the pre-trained AlexNet, every convolutional layer ends with a max pooling layer that will generate the greatest value based on a specified filter size. Each convolutional layer visualizes the object features in the images, such as texture, angle, and the edge of the target images. Figure 5 shows the training graph using a customized AlexNet pre-trained network. This graph shows the accuracy of the training and validation dataset using five epochs with a 0.0001 learning rate. There are 625 total iterations and 125 iterations per epoch. Figure 6 shows the graph of the training process of the loss and validation of the dataset using five epochs with a 0.0001 learning rate. The number of iterations and losses are shown along the x-axis and y-axis, respectively.
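As a concrete illustration of the transfer-learning setup described above (227 × 227 × 3 inputs, replacing the final classification layer for two classes, five epochs, learning rate 0.0001), a minimal PyTorch sketch could look as follows. The paper uses a customized AlexNet whose exact code is not published, so this is an assumed equivalent rather than the authors' implementation.

```python
# Hedged sketch of AlexNet transfer learning for two classes (empty / occupied),
# using the hyperparameters quoted in the text: 227x227 inputs, 5 epochs, lr = 0.0001.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((227, 227)),          # match the expected AlexNet input size
    transforms.ToTensor(),
])

# Assumed folder layout: pklot_split/train/{empty,occupied}
train_set = datasets.ImageFolder("pklot_split/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)  # pre-trained on 1000 classes
model.classifier[6] = nn.Linear(4096, 2)    # replace the last layer: 2 classes instead of 1000

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):                      # five epochs, as reported in the text
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss = {loss.item():.4f}")
```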

Figure 5. The training process of accuracy and validation of the proposed model with transfer learning (AlexNet). The light blue curve represents training accuracy, and its smoothed training accuracy curve is shown using the dark blue curve. Further, the black curve represents the training validation of the proposed model.

Figure 6. The training process of loss and validation of the proposed model with transfer learning (AlexNet). The light orange curve represents the number of losses, and its smoothed loss curve is shown using the dark orange line. Lastly, the black curve represents the loss validation of the proposed model.

4.2. Phase-II Using SCOPE with YOLO

YOLO-V3 is also used to detect empty and occupied parking lots in real time. YOLO-V3 is significantly better and faster than other techniques, such as R-CNN; while R-CNN can be considered fast, it requires a lot of computations and repetition of processes [27]. On the other hand, YOLO-V3, as its name suggests, does all of its work in just one scan. In simple words, YOLO-V3 uses convolutional neural networks for object detection, and it is approximately six times faster than R-CNN. It can perform the following tasks:
• Detect multiple objects in an image.
• Predict multiple classes.
• Identify the locations of objects in the image.
The YOLO training process is shown in Figure 7. Furthermore, during the training process, each image will be resized to a width of 416 and a height of 416, which is also required in the configuration file and may be changed as needed. The 6000 iterations (max_batches) are identified to predict two classes (empty parking slots and occupied parking slots), and the steps of iterations 4800 and 5400 (as per the policy) are equal to 80% and 90% of the max_batches. The training configuration file uses a network size of width = 416 and height = 416, which means that every image will be resized to the network's size during training and detection.

Figure 7. The training process of the proposed model using YOLOv3.
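The training settings quoted above correspond to a handful of fields in a darknet-style configuration file. The sketch below simply collects those values (network size 416 × 416, learning_rate = 0.001, max_batches = 6000, steps = 4800 and 5400, and two classes) and renders them in the usual key=value form; it is a hedged illustration, not the configuration file actually used by the authors.

```python
# Collect the darknet-style training options quoted in the text and render them as
# key=value lines. The values come from the paper; the helper itself is illustrative.
# In a real YOLOv3 .cfg, width/height/learning_rate/policy/steps/max_batches live in
# the [net] block, while classes is set in each [yolo] block.
yolo_options = {
    "width": 416,             # network input width
    "height": 416,            # network input height
    "channels": 3,
    "learning_rate": 0.001,   # initial learning rate
    "policy": "steps",        # learning-rate decrease policy
    "steps": "4800,5400",     # 80% and 90% of max_batches
    "max_batches": 6000,      # total training iterations for the two-class task
    "classes": 2,             # empty parking slot, occupied parking slot
}

def render_cfg_lines(options: dict) -> str:
    """Render the options as darknet-style configuration lines."""
    return "\n".join(f"{key}={value}" for key, value in options.items())

print(render_cfg_lines(yolo_options))
```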

The learning rate (learning_rate = 0.001) is a hyperparameter that adjusts and controls the weights of the network. The learning rate needs to be high at the beginning of the training process. Once you set the learning rate value, train the model, and wait for the learning rate to eventually decrease over time and enable the model to converge. The learning rate decrease policy is mentioned in the configuration file. The blue curve in Figure 7 shows the training loss, and the red curve shows the mean average precision (mAP), which is 99.9%.
In this research, the performance of the proposed model is measured using accuracy, false negative rate (FNR), true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false positive rate (FPR), false discovery rate (FDR), and F1-Score.
The performance of the proposed model is evaluated based on the counts of validation records correctly and incorrectly predicted by the proposed trained model. The accuracy of the proposed model provides the information about how many images are correctly classified in the confusion matrix by using the trained proposed model, as shown in Equation (1).

Accuracy = (τρ + τν) / N (1)

The error rate, miss rate, or false negative rate (FNR) of the proposed model is calculated using Equation (2), and provides the information about how many images are incorrectly identified in the confusion matrix.

Miss rate = (Fρ + Fν) / N (2)

The other metric of the measure of performance is sensitivity, also called recall or the true positive rate (TPR), and is calculated with Equation (3).

Sensitivity = τρ / (τρ + Fν) (3)

One more performance measure metric which is used is specificity, or the true negative rate (TNR), and it is measured with Equation (4).

Specificity = τν / (τν + Fρ) (4)

The precision or positive predictive value (PPV) of the proposed model is measured with Equation (5).

Precision = τρ / (τρ + Fρ) (5)

Equation (6) is used to find out the negative predictive value (NPV) of the proposed model.

NPV = τν / (τν + Fν) (6)

The false positive rate (FPR), or fallout, of the proposed model is measured with Equation (7).

FPR = Fρ / (τν + Fρ) (7)

Equation (8) represents the false discovery rate (FDR) of the proposed model.

FDR = Fρ / (Fρ + τρ) (8)

F1-Score is an important metric used to evaluate the proposed model. It is based on precision and recall, and is calculated by taking the harmonic mean of recall and precision, as shown in Equation (9).

F1-Score = (2 × TPR × PPV) / (TPR + PPV) (9)
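For reference, Equations (1)–(9) can be computed directly from the confusion-matrix counts. The short Python sketch below does this and, when given the training counts reported in Table 2 (τρ = 7991, τν = 7988, Fρ = 12, Fν = 9), reproduces the Training row of Table 5 up to rounding; the function name is illustrative and not taken from the paper.

```python
# Compute the metrics of Equations (1)-(9) from confusion-matrix counts.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    n = tp + tn + fp + fn
    tpr = tp / (tp + fn)                      # sensitivity / recall, Eq. (3)
    ppv = tp / (tp + fp)                      # precision, Eq. (5)
    return {
        "accuracy": (tp + tn) / n,            # Eq. (1)
        "miss_rate": (fp + fn) / n,           # error rate, Eq. (2)
        "sensitivity": tpr,
        "specificity": tn / (tn + fp),        # Eq. (4)
        "precision": ppv,
        "npv": tn / (tn + fn),                # Eq. (6)
        "fpr": fp / (tn + fp),                # Eq. (7)
        "fdr": fp / (fp + tp),                # Eq. (8)
        "f1_score": 2 * tpr * ppv / (tpr + ppv),  # Eq. (9)
    }

# Training counts from Table 2: TP = 7991, TN = 7988, FP = 12, FN = 9
for name, value in classification_metrics(7991, 7988, 12, 9).items():
    print(f"{name}: {value:.4f}")   # accuracy ~= 0.9987, miss rate ~= 0.0013, ...
```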
Table 2 shows the confusion matrix from the training phase of the proposed model
used for automated parking lot detection prediction. These metrics are applied in the
training and validation dataset, which is divided into 80% for the training dataset and 20%
for the validation dataset. In this study, 80% of the training data set is used for building
the proposed model and 20% of the dataset is used for measuring the proposed model’s
accuracy. The results of the training and validation dataset in the form of a confusion matrix
are shown in Tables 2–4. There are 20,000 randomly selected images from the parking lot
dataset which are used for transfer learning with the use of a customized AlexNet network,
in which 16,000 (80%) images are used for training and 4000 (20%) images are used to
validate the training model, as shown in the respective tables.

Table 2. Confusion matrix of the training of the proposed model during the prediction of automated
parking lot detection.

Expected Output (Ee, Eo) | Oe (Empty) | Oo (Occupied) | Total
Input Ee (Empty) | 7991 | 9 | 8000
Input Eo (Occupied) | 12 | 7988 | 8000
Total | 8003 | 7997 | 16,000

Table 3. Confusion matrix of the validation of the proposed model using AlexNet during prediction
of automated parking lot detection.

Expected Output (Ee, Eo) | Oe (Empty) | Oo (Occupied) | Total
Input Ee (Empty) | 1997 | 3 | 2000
Input Eo (Occupied) | 4 | 1996 | 2000
Total | 2001 | 1999 | 4000

Table 4. Confusion matrix of the validation of the proposed model using YOLO during the prediction
of automated parking lot detection.

Expected Output (Ee, Eo) | Oe (Empty) | Oo (Occupied) | Total
Input Ee (Empty) | 76,173 | 863 | 77,036
Input Eo (Occupied) | 2027 | 68,022 | 70,049
Total | 78,200 | 68,885 | 147,085

The proposed model (transfer learning with AlexNet) is evaluated using training and validation data, employing various statistical measures for performance assessment, as shown in Table 5.

Table 5. Performance evaluation of proposed model (transfer learning with AlexNet) using training
and validation data with different statistical measures.

Results | Accuracy | FNR (Miss Rate) | TPR (Sensitivity) | TNR (Specificity) | PPV (Precision) | NPV | FPR | FDR | F1-Score
Training | 0.9987 (99.87%) | 0.0013 (0.13%) | 0.9989 (99.89%) | 0.9985 (99.85%) | 0.9985 (99.85%) | 0.9989 (99.89%) | 0.0015 (0.15%) | 0.0015 (0.15%) | 0.9986 (99.87%)
Validation | 0.9973 (99.73%) | 0.0028 (0.28%) | 0.9970 (99.70%) | 0.9975 (99.75%) | 0.9975 (99.75%) | 0.9970 (99.70%) | 0.0025 (0.25%) | 0.0025 (0.25%) | 0.9972 (99.73%)

The proposed model (transfer learning with YOLO) is evaluated using validation data, employing various statistical measures for performance assessment, as shown in Table 6.

Table 6. Performance evaluation of proposed model (transfer learning with YOLO) using validation
data with different statistical measures.

Results | Accuracy | FNR (Miss Rate) | TPR (Sensitivity) | TNR (Specificity) | PPV (Precision) | NPV | FPR | FDR | F1-Score
Validation | 0.9804 (98.04%) | 0.0196 (1.96%) | 0.9888 (98.88%) | 0.9711 (97.11%) | 0.9741 (97.41%) | 0.9875 (98.75%) | 0.0289 (2.89%) | 0.0259 (2.59%) | 0.9814 (98.14%)

The description of the confusion matrix in Table 2 and the corresponding results are shown
below.

• True positive τρ = 7991; the model accurately classified 7991 images in the empty lot
class out of 8000 images.
• True negative τν = 7988; the model accurately classified 7988 images in the occupied lot
class out of 8000 images.

• False positive Fρ = 12; consequently, the model mistakenly identified 12 images of
the occupied lot class as the empty lot class.
• False negative (Fν ) = 9; consequently, the model mistakenly identified 9 images of the
empty lot class as the occupied lot class.
A total of 7991 and 7988 images were correctly identified as the empty lot and occupied
lot classes, respectively.

Tables 5 and 6 present the proposed model's measurements obtained using Equations (1)–(9).
The accuracy, FNR, TPR, TNR, PPV, NPV, FPR, FDR, and F1-Score metrics of the training and
validation datasets of the proposed model are shown in these tables. The rapid advancement
of intelligent connected technologies and cellular vehicle-to-everything communication
(C-V2X) presents new opportunities for addressing the challenges of connected auto-
mated vehicles (CAVs) at continuous signalized intersections, especially in the context of
ecodriving [28].

Table 7. The performance comparison of the proposed model with approaches in the literature.

Literature | Training Accuracy (%) | Training Miss Rate (%) | Validation Accuracy (%) | Validation Miss Rate (%)
Fabian (2013) [22] | 96.40 | 3.60 | 96.2 | 3.80
Amato et al. (2018) [23] | 96.36 | 3.64 | 96.1 | 3.90
Kashif et al. (2020) [24] | 97.60 | 2.40 | 96.6 | 3.40
Proposed model (YOLO) | 99.89 | 0.11 | 98.04 | 1.96
Proposed model (AlexNet) | 99.87 | 0.13 | 99.73 | 0.27

5. Conclusions and Future Work


The emergence of smart cities has provided many challenges and requirements, in-
cluding the autonomous data capturing ability and analytical provision for the end user
and system to make decisions. For autonomous data capturing, IoT has become an ideal
domain that provides the integration of multiple devices to capture dynamic data; therefore,
the role of IoT, artificial intelligence, and analytics in the solutions related to smart cities is
always prominent. As mentioned earlier, SCOPE is a model used for the management of
the ecosystems of smart cities with cloud computing that gains autonomy using learning
mechanisms and analytics. This paper has presented the data processing and learning
components of SCOPE to validate two scenarios, i.e., by engaging AlexNet as the learning
controller and by replacing AlexNet with YOLO-V3. The selected scenario investigated in
this study is the identification of parking lot statuses, and, for this purpose, both models
performed successfully as the learning controllers and provided significant results. The
accuracy of the AlexNet and YOLO models reached 99.87% and 99.89%, respectively, and the
comparison with previous models shows that the earlier results were also improved upon
with the help of the proposed SCOPE model. It is important to note that the
SCOPE model was evaluated on multiple other object identification scenarios for smart
cities and has provided significant results in all scenarios. Future research should aim
to explore the seamless integration of autonomous parking solutions with smart traffic
management systems. This would involve real-time communication between vehicles and
traffic infrastructure to optimize both parking and traffic flow within smart cities.

Author Contributions: Conceptualization, T.A. (Turki Alghamdi) and Y.A.; Methodology, Q.A.;
Formal analysis, G.A.; Writing—original draft, T.A. (Tahir Alyas); Visualization, A.A. All authors
have read and agreed to the published version of the manuscript.
Funding: This research is supported by the Deanship of Scientific Research, Islamic University of
Madinah, KSA.
Institutional Review Board Statement: Not Applicable.
Informed Consent Statement: Not Applicable.
Data Availability Statement: The data used in this paper can be obtained from the corresponding
author upon request.
Acknowledgments: The authors participated in collecting the datasets and results in this study.
Conflicts of Interest: The authors declare no conflict of interest regarding the publication of
this work.

References
1. Fang, Y.; Min, H.; Wu, X.; Wang, W.; Zhao, X.; Mao, G. On-Ramp Merging Strategies of Connected and Automated Vehicles
Considering Communication Delay. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15298–15312. [CrossRef]
2. Belissent, J. Getting clever about smart cities: New opportunities require new business models. Camb. Mass. 2010, 193, 244–277.
3. Draz, U.; Ali, T.; Khan, J.A.; Majid, M.; Yasin, S. A real-time smart dumpsters monitoring and garbage collection system. In
Proceedings of the 2017 Fifth International Conference on Aerospace Science & Engineering (ICASE), Islamabad, Pakistan, 14–16
November 2017; IEEE: Piscataway, NJ, USA, 2017.
4. Yue, W.; Li, C.; Wang, S.; Xue, N.; Wu, J. Cooperative Incident Management in Mixed Traffic of CAVs and Human-Driven Vehicles.
IEEE Trans. Intell. Transp. Syst. 2023. [CrossRef]
5. Safi, Q.G.K.; Luo, S.; Pan, L.; Liu, W.; Hussain, R.; Bouk, S.H. SVPS: Cloud-based smart vehicle parking system over ubiquitous
VANETs. Comput. Netw. 2018, 138, 18–30. [CrossRef]
6. Paidi, V.; Fleyeh, H.; Håkansson, J.; Nyberg, R.G. Smart parking sensors, technologies and applications for open parking lots: A
review. IET Intell. Transp. Syst. 2018, 12, 735–741. [CrossRef]
7. Cai, B.Y.; Alvarez, R.; Sit, M.; Duarte, F.; Ratti, C. Deep Learning-Based Video System for Accurate and Real-Time Parking
Measurement. IEEE Internet Things J. 2019, 6, 7693–7701. [CrossRef]
8. Vu, H.T.; Huang, C.-C. Parking space status inference upon a deep CNN and multi-task contrastive network with spatial
transform. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 1194–1208. [CrossRef]
9. Zhang, L.; Huang, J.; Li, X.; Xiong, L. Vision-based parking-slot detection: A DCNN-based approach and a large-scale benchmark
dataset. IEEE Trans. Image Process. 2018, 27, 5350–5364. [CrossRef]
10. Chen, J.; Wang, Q.; Cheng, H.H.; Peng, W.; Xu, W. A Review of Vision-Based Traffic Semantic Understanding in ITSs. IEEE Trans.
Intell. Transp. Syst. 2022, 23, 19954–19979. [CrossRef]
11. Tekouabou, S.C.K.; Cherif, W.; Silkan, H. Improving parking availability prediction in smart cities with IoT and ensemble-based
model. J. King Saud Univ. Comput. Inf. Sci. 2020, 34, 687–697.
12. Luo, J.; Wang, G.; Li, G.; Pesce, G. Transport infrastructure connectivity and conflict resolution: A machine learning analysis.
Neural Comput. Appl. 2022, 34, 6585–6601. [CrossRef]
13. Orrie, O.; Silva, B.; Hancke, G.P. A Wireless Smart Parking System. In Proceedings of the 41st Annual Conference of the IEEE
Industrial Electronics Society (IECON), Yokohama, Japan, 9–12 November 2015.
14. Karthi, M.; Preethi, H. Smart Parking with Reservation in Cloud based environment. In Proceedings of the 2016 IEEE International
Conference on Cloud Computing in Emerging Markets, Bangalore, India, 19–21 October 2016; pp. 164–167.
15. Tabassum, N.; Namoun, A.; Alyas, T.; Tufail, A.; Taqi, M.; Kim, K.-H. Classification of Bugs in Cloud Computing Applications
Using Machine Learning Techniques. Appl. Sci. 2023, 13, 2880. [CrossRef]
16. Xu, J.; Guo, K.; Zhang, X.; Sun, P.Z.H. Left Gaze Bias between LHT and RHT: A Recommendation Strategy to Mitigate Human
Errors in Left- and Right-Hand Driving. IEEE Trans. Intell. Veh. 2023. [CrossRef]
17. Arora, D.; Garg, M.; Gupta, M. Diving deep in Deep Convolutional Neural Network. In Proceedings of the 2020 2nd International
Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 18–19
December 2020; pp. 749–751.
18. Ojagh, S.; Cauteruccio, F.; Terracina, G.; Liang, S.H. Enhanced air quality prediction by edge-based spatiotemporal data
preprocessing. Comput. Electr. Eng. 2021, 96 Pt B, 107572. [CrossRef]
19. Chen, J.; Xu, M.; Xu, W.; Li, D.; Peng, W.; Xu, H. A Flow Feedback Traffic Prediction Based on Visual Quantified Features. IEEE
Trans. Intell. Transp. Syst. 2023, 24, 10067–10075. [CrossRef]
20. Xu, J.; Guo, K.; Sun, P.Z.H. Driving Performance Under Violations of Traffic Rules: Novice Vs. Experienced Drivers. IEEE Trans.
Intell. Veh. 2022, 7, 908–917. [CrossRef]
21. Assim, M.; Al-Omary, A. A survey of IoT-based smart parking systems in smart cities. In Proceedings of the 3rd Smart Cities
Symposium (SCS 2020), Online Conference, 21–23 September 2020; pp. 35–38. [CrossRef]
22. Takehara, R.; Gonsalves, T. Autonomous Car Parking System using Deep Reinforcement Learning. In Proceedings of the 2021 2nd
International Conference on Innovative and Creative Information Technology (ICITech), Salatiga, Indonesia, 23–25 September
2021; pp. 85–89. [CrossRef]
23. Sajna, S.; Nair, R.R. Learning-Based Smart Parking System. In Proceedings of the International Conference on Computational Intelligence.
Algorithms for Intelligent Systems; Tiwari, R., Pavone, M.F., Ravindranathan Nair, R., Eds.; Springer: Singapore, 2023. [CrossRef]
24. Iqbal, K.; Abbas, S.; Khan, M.A.; Ather, A.; Khan, M.S.; Fatima, A.; Ahmad, G. Autonomous Parking-Lots Detection with
Multi-Sensor Data Fusion Using Machine Deep Learning Techniques. CMC-Comput. Mater. Contin. 2021, 66, 1595–1612.
[CrossRef]
25. Chen, Z.; Wang, X.; Zhang, W.; Yao, G.; Li, D.; Zeng, L. Autonomous Parking Space Detection for Electric Vehicles Based on
Improved YOLOV5-OBB Algorithm. World Electr. Veh. J. 2023, 14, 276. [CrossRef]
26. Almeida, P.; Oliveira, L.S.; Silva, E., Jr.; Britto, A., Jr.; Koerich, A. PKLot—A robust dataset for parking lot classification. Expert Syst.
Appl. 2015, 42, 4937–4949. Available online: https://www.kaggle.com/datasets/ammarnassanalhajali/pklot-dataset (accessed
on 1 October 2023). [CrossRef]

27. Ma, X.; Dong, Z.; Quan, W.; Dong, Y.; Tan, Y. Real-time assessment of asphalt pavement moduli and traffic loads using monitoring
data from Built-in Sensors: Optimal sensor placement and identification algorithm. Mech. Syst. Signal Process. 2023, 187, 109930.
[CrossRef]
28. Zhang, X.; Fang, S.; Shen, Y.; Yuan, X.; Lu, Z. Hierarchical Velocity Optimization for Connected Automated Vehicles With Cellular
Vehicle-to-Everything Communication at Continuous Signalized Intersections. IEEE Trans. Intell. Transp. Syst. 2023. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
