
Volume 11, Issue 3, March 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Drone Movement Detection Network using Raspberry Pi
T.S.R. Krishna Prasad, P. Mahesh Kiran, M. Hima Sameera, P. Sri Sai Nachiketha, M. Mukesh Vamsi.
Electronics and Communication Engineering
Seshadri Rao Gudlavalleru Engineering College Gudlavalleru, India

Abstract:- This research paper proposes a system for detecting drones that uses a Raspberry Pi as its primary computing platform and implements the SSD MobileNetv2 architecture. The proposed approach involves training a machine learning model using deep learning and convolutional neural network algorithms. The SSD MobileNetv2 architecture is chosen for its accuracy and optimal performance in real-time object detection. The dataset includes images of numerous drones in various positions and has undergone image augmentations such as flipping, blurring, granulation and grayscale conversion, applied at random before training. Multiple cameras, connected over a network, are connected to a Raspberry Pi, employing a star network topology with the Raspberry Pi as the central hub. A dedicated machine running the machine learning model accesses the video feeds from the Raspberry Pi and infers on them in real time; the detection results are sent back to the Raspberry Pi. Computer vision techniques are applied to the region of interest in the video feeds to determine the drone's trajectory. The system includes physical and digital alerts, comprising alarm systems and SMS alerts, so that authorities can be informed immediately whenever a drone is detected.

Keywords:- Drones, Raspberry Pi, SSD MobileNetv2, real-time detection, star network topology, trajectory, SMS alerts

I. INTRODUCTION

Fig. 1: Bottle Neck Residual Block of MobileNetv2

IJISRT23MAR1123 www.ijisrt.com 1765



Fig. 2: SSD MobileNetv2 Architecture

Drones have become increasingly popular in recent years due to their usability and numerous applications across a variety of fields. They have been integrated into areas such as agriculture [1], search and rescue operations [2], aerial photography, forest fire detection [3], smoke detection [4], delivery services and many more. Drones are capable of inspecting and mapping the topology of various terrains [5] and performing geological surveys [6] more efficiently than humans. They are used in situations where human intervention is hazardous, such as the inspection of volcanoes, oil rigs and power plants. However, drones also pose a significant threat. As they are small and compact, they can perform damaging activities while remaining hidden. Drones can be flown over restricted areas such as airports and government buildings and may acquire sensitive information. They can capture videos and images without consent or acknowledgement. Drones can also be applied to illegal surveillance and smuggling, which can cause irreparable damage to individuals, organizations and countries. Terrorists have used drones for spying, obtaining and leaking confidential information, and attacking public places and military bases.

Machine learning algorithms have become robust and can perform efficiently under critical conditions. TensorFlow is a framework used to train advanced machine learning models. It provides an Object Detection API for training object detection and classification models, with various models pre-trained on different architectures such as CenterNet, EfficientDet, SSD MobileNet, SSD ResNet, Faster R-CNN and Mask R-CNN. TensorFlow models require fewer computational resources and can perform with speed and accuracy. Due to this lower resource requirement, devices like embedded hardware and single-board computers can make use of TensorFlow models. SSD MobileNetv2 is one such architecture.

MobileNetv2 contains three convolutional layers [7]. The Expansion layer expands the number of channels in the input tensors before depthwise convolution is applied. The Depthwise convolution layer performs 3x3 depthwise channel filtering operations. The Projection layer reduces the number of tensor channels and dimensions.

MobileNetv2 is used as the base for the SSD architecture. The Single Shot Detector (SSD) is used for detection and classification. The SSD architecture is built upon a MobileNetv2 base, but the fully connected layers of MobileNetv2 are discarded. This enables the model to run on devices with low resources and perform at optimum speed [8]. The SSD performs object localization and classification in a single pass. Non-Maximum Suppression (NMS) is performed using the bounding box regression technique Multibox [9], which uses an Inception-style convolutional network [10]. The loss function of Multibox comprises location loss and confidence loss. Multibox uses priors as prediction results and regresses them toward the closest ground-truth bounding boxes, measured by Intersection over Union (IOU), before applying non-maximum suppression.

II. CREATING THE MODEL

A. Dataset
The dataset contains 4032 images of drones. The region of interest in each image is annotated, and the annotations are saved in ".xml" format. The images and their respective annotations are divided into three parts before applying data augmentation techniques: the train dataset contains 3024 images, the test dataset contains 806 images and the validation dataset contains 202 images. The training dataset is pre-processed by applying Gaussian blur, grayscale conversion, flipping and granulation at random. The same image augmentations are also applied to the test dataset. The accuracy and false positive rate greatly depend on the quality of the dataset; the data augmentation process makes the training data more reliable and results in a model with a higher degree of accuracy and a lower false positive rate [11].
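The 4032-image split into train, test and validation parts can be illustrated with a minimal Python sketch. This is not the paper's own tooling; the filenames are hypothetical stand-ins for the annotated images:

```python
import random

def split_dataset(items, train_n=3024, test_n=806, valid_n=202, seed=42):
    """Shuffle annotated images and split them into train/test/valid parts,
    mirroring the 3024/806/202 split described above."""
    assert len(items) == train_n + test_n + valid_n
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the sketch
    train = items[:train_n]
    test = items[train_n:train_n + test_n]
    valid = items[train_n + test_n:]
    return train, test, valid

# Hypothetical file names standing in for the real dataset.
images = [f"drone_{i:04d}.jpg" for i in range(4032)]
train, test, valid = split_dataset(images)
print(len(train), len(test), len(valid))  # 3024 806 202
```

In practice each image travels with its ".xml" annotation file, so the same split must be applied to both.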




Fig. 3: Intersection over Union calculation
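The Intersection over Union calculation of Fig. 3 can be sketched in plain Python. This is a generic IoU for axis-aligned boxes in (x1, y1, x2, y2) form, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # intersection 1, union 7 -> 1/7
```

Multibox-style training uses this score to match prior boxes to ground-truth boxes, and NMS uses it to suppress overlapping predictions.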

B. Training
The annotations are converted into a TensorFlow record format, and a label map containing one object id is created. The SSD MobileNetv2 architecture is configured and fine-tuned on the obtained TensorFlow records. The configuration is defined for a single class, with a batch size of 8 and 2000 steps. The lower batch size enables the model to work with fewer resources, while the number of steps defines the number of weight-update iterations in every epoch of training. The model's accuracy and performance are greatly influenced by regularization techniques, which also prevent overfitting. The SSD MobileNetv2 architecture uses weight decay, dropout, batch normalization and data augmentation. Data augmentation is performed directly on the dataset before training. Weight decay adds a penalty term to the loss function, which pushes the model toward smaller weights and improves reliability. Dropout randomly drops some of the neural network units by setting them to zero, forcing the model to adapt and become more robust. The model is trained until a constant loss is achieved, and each epoch is saved as a checkpoint. The trained weights are stored in a protocol buffer format and frozen by converting them into a Graph Definition, essentially combining the trained weights and metadata. The frozen weights and the configuration pipeline collectively represent the trained model. The weights are inferred on both the test and validation datasets.
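The two regularizers described above can be shown with a toy sketch in plain Python; this illustrates the mechanism only, not the TensorFlow configuration the paper actually uses:

```python
import random

def loss_with_weight_decay(base_loss, weights, decay=0.0005):
    """L2 weight decay: penalize large weights by adding decay * sum(w^2)
    to the loss, nudging the model toward smaller weights."""
    return base_loss + decay * sum(w * w for w in weights)

def dropout(units, rate=0.5, rng=None):
    """Inverted dropout: zero each unit with probability `rate` and scale
    survivors by 1/(1-rate), as done at training time."""
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [u / keep if rng.random() < keep else 0.0 for u in units]

print(loss_with_weight_decay(1.0, [3.0, 4.0], decay=0.01))  # 1.0 + 0.01*25 = 1.25
```

The decay value 0.0005 here is a common default, not a value stated in the paper.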

Fig. 4: Process of creating the model




Fig. 5: Losses obtained during training



III. DRONE DETECTION SYSTEM

The proposed system consists of a camera network, a Raspberry Pi, a physical alarm system, an SMS API and an application for remote access. The camera network consists of multiple cameras connected over a Wide Area Network. The access point for all these cameras is the Raspberry Pi, which is configured to act as a hub for the cameras. The Raspberry Pi has two internal systems: a video server and a trajectory determining system.

Fig. 6: The proposed system
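The dedicated machine in the proposed system runs an inference script that handles incoming feeds with multithreading and thread-pooling. A minimal sketch using Python's ThreadPoolExecutor, where `detect` is a hypothetical stand-in for the real frozen-graph inference call:

```python
from concurrent.futures import ThreadPoolExecutor

def detect(frame):
    """Stand-in for model inference; the real system would run the frozen
    SSD MobileNetv2 graph. Returns (class name, confidence, bounding box)."""
    return [("drone", 0.9, (10, 10, 50, 50))] if frame.get("has_drone") else []

def run_inference(feeds, workers=4):
    """Submit each camera feed's frame to the thread pool and collect the
    detection results keyed by camera id."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {cam: pool.submit(detect, frame) for cam, frame in feeds.items()}
        return {cam: fut.result() for cam, fut in futures.items()}

# Toy frames standing in for decoded video frames.
feeds = {"wifi_cam": {"has_drone": True}, "web_cam": {"has_drone": False}}
print(run_inference(feeds))
```

In the actual system each thread would pull frames continuously from the relayed video streams rather than processing a single dictionary.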

The video server uses the RTSP protocol [12] for accessing the camera video feeds and FFmpeg [13] for relaying the video feeds to the machine learning model. The dedicated machine runs an inference script, which uses multithreading and thread-pooling for the incoming video feeds. Each thread accesses the model individually and sends the detection results, which comprise confidence scores, class names and bounding boxes, to the Raspberry Pi. The detection results are passed to the trajectory determining system in the Raspberry Pi, which creates its own region of interest based on the bounding boxes. These regions of interest undergo a motion detection algorithm [14], which applies the absolute difference between two subsequent video frames as a threshold mask to the regions of interest. A binary version of the absolute difference mask is used for detecting contours wherever the subsequent frames differ. The shift in the centroid of the contour across frames determines the drone trajectory. The physical alarm systems are triggered by the detection results, and the SMS API sends an alert message periodically to the users. The video feeds can be accessed remotely on the network through a Flask application.

IV. RESULTS

The output of the detection system is accessible through the application. The system was tested using three cameras: a WiFi camera, a web camera and a USB camera. The app displays the detection results of these cameras in three cells, while a fourth cell displays the trajectory and location information of the detected drone. The location information is based on the location of the camera.
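The frame-differencing and centroid steps described above can be sketched on toy grayscale frames. Plain Python lists stand in for OpenCV images here; the real pipeline would likely use calls such as cv2.absdiff, cv2.threshold and cv2.findContours:

```python
def motion_mask(prev, curr, thresh=25):
    """Binary mask of pixels whose absolute frame difference exceeds thresh."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(prev, curr)]

def centroid(mask):
    """Centroid (row, col) of the non-zero pixels, or None if no motion."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(r for r, _ in pts) / len(pts), sum(c for _, c in pts) / len(pts))

# Two 4x4 frames: a bright blob moves one column to the right.
prev = [[0, 200, 0, 0],
        [0, 200, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
curr = [[0, 0, 200, 0],
        [0, 0, 200, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(centroid(motion_mask(prev, curr)))  # changed pixels straddle the move: (0.5, 1.5)
```

Tracking this centroid across successive frame pairs yields the direction of motion and hence the drone's trajectory.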



Fig. 7: The output of the proposed system

V. CONCLUSION

This study outlined the usage of machine learning models on low-resource devices with optimal real-time performance. The proposed system is a low-cost security device integrated with machine learning, with a wide range of coverage, which is both reliable and efficient for operating in real time. The application for monitoring camera feeds and the SMS alerts make the system dependable in situations where time is critical.

REFERENCES

[1.] Abderahman Rejeb, Alireza Abdollahi, Karim Rejeb, Horst Treiblmaier, "Drones in agriculture: A review and bibliometric analysis," Computers and Electronics in Agriculture, vol. 198, 2022, 107017, ISSN 0168-1699. https://doi.org/10.1016/j.compag.2022.107017
[2.] Sven Mayer, Lars Lischke, Paweł W. Woźniak, "Drones for Search and Rescue," 1st International Workshop on Human-Drone Interaction, Ecole Nationale de l'Aviation Civile (ENAC), May 2019, Glasgow, United Kingdom. ⟨hal-02128385⟩
[3.] D. Kinaneva, G. Hristov, J. Raychev and P. Zahariev, "Early Forest Fire Detection Using Drones and Artificial Intelligence," 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 2019, pp. 1060-1065, doi: 10.23919/MIPRO.2019.8756696.
[4.] D. Kinaneva, G. Hristov, J. Raychev and P. Zahariev, "Early Forest Fire Detection Using Drones and Artificial Intelligence," 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 2019, pp. 1060-1065, doi: 10.23919/MIPRO.2019.8756696.
[5.] Awasthi, S., Balusamy, B., Porkodi, V. (2020). Artificial Intelligence Supervised Swarm UAVs for Reconnaissance. In: Batra, U., Roy, N., Panda, B. (eds) Data Science and Analytics. REDSET 2019. Communications in Computer and Information Science, vol 1229. Springer, Singapore.
[6.] W. Budiharto, A. Chowanda, A. A. S. Gunawan, E. Irwansyah and J. S. Suroso, "A Review and Progress of Research on Autonomous Drone in Agriculture, Delivering Items and Geographical Information Systems (GIS)," 2019 2nd World Symposium on Communication Engineering (WSCE), Nagoya, Japan, 2019, pp. 205-209, doi: 10.1109/WSCE49000.2019.9041004.
[7.] M. T. Topalli, M. Yilmaz and M. F. Çorapsiz, "Real Time Implementation of Drone Detection using TensorFlow and MobileNetV2-SSD," 2021 7th International Conference on Electrical, Electronics and Information Engineering (ICEEIE), Malang, Indonesia, 2021, pp. 436-439, doi: 10.1109/ICEEIE52663.2021.9616846.
[8.] Foehn, P., Brescianini, D., Kaufmann, E. et
al. AlphaPilot: autonomous drone racing. Auton
Robot 46, 307–320 (2022).
https://doi.org/10.1007/s10514-021-10011-y
[9.] M. Saqib et al., "Real-Time Drone Surveillance
and Population Estimation of Marine Animals
from Aerial Imagery," 2018 International
Conference on Image and Vision Computing New
Zealand (IVCNZ), Auckland, New Zealand, 2018,
pp. 1-6, doi: 10.1109/IVCNZ.2018.8634661.
[10.] Q. Nguyen, H. T. Nguyen, V. C. Tran, H. X.
Pham and J. Pestana, "A Visual Real-time Fire
Detection using Single Shot MultiBox Detector
for UAV-based Fire Surveillance," 2020 IEEE
Eighth International Conference on
Communications and Electronics (ICCE), Phu
Quoc Island, Vietnam, 2021, pp. 338-343, doi:
10.1109/ICCE48956.2021.9352080.
[11.] Di Puglia Pugliese, L., Guerriero, F. & Scutellá,
M.G. The Last-Mile Delivery Process with Trucks
and Drones Under Uncertain Energy
Consumption. J Optim Theory Appl 191, 31–67
(2021). https://doi.org/10.1007/s10957-021-
01918-8
[12.] Lakew Yihunie, F., Singh, A.K., Bhatia, S. (2020).
Assessing and Exploiting Security Vulnerabilities
of Unmanned Aerial Vehicles. In: Somani, A.K.,
Shekhawat, R.S., Mundra, A., Srivastava, S.,
Verma, V.K. (eds) Smart Systems and IoT:
Innovations in Computing. Smart Innovation,
Systems and Technologies, vol 141. Springer,
Singapore.
[13.] Erazo, E. Tayupanta and S.-B. Ko, "Epipolar
Geometry on Drones Cameras for Swarm
Robotics Applications," 2020 IEEE International
Symposium on Circuits and Systems (ISCAS),
Seville, Spain, 2020, pp. 1-5, doi:
10.1109/ISCAS45731.2020.9180981.
[14.] S. -P. Yong, A. L. W. Chung, W. K. Yeap and P.
Sallis, "Motion Detection Using Drone's
Vision," 2017 Asia Modelling Symposium
(AMS), Kota Kinabalu, Malaysia, 2017, pp. 108-
112, doi: 10.1109/AMS.2017.25.

