
Optical Fiber Technology 57 (2020) 102251


Alarm classification prediction based on cross-layer artificial intelligence interaction in self-optimized optical networks (SOON)

Bing Zhang a, Yongli Zhao a,⁎, Sabidur Rahman b, Yajie Li a,b, Jie Zhang a

a State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
b University of California, Davis, CA 95616, USA

A R T I C L E   I N F O

Keywords:
Optical networks
Distributed system
Alarm prediction
Cross-layer AI

A B S T R A C T

Alarm prediction in optical networks focuses on forecasting network failures from the state of equipment and links. Existing prediction methods usually rely on large amounts of data, and centralizing all processing in the network controller or management system may increase the system burden. In this paper, a novel method is proposed in self-optimized optical networks (SOON) to implement alarm classification prediction based on a cross-layer artificial intelligence (AI) architecture. We adopt alarm risk assessment and data augmentation with the synthetic minority oversampling technique (SMOTE). As a distributed system, cross-layer AI completes decomposed functions through interactions between different AI engines. With the help of the controller, the functions can be executed in order. The amount of data required for prediction is far less than that of other methods. The validity of the method is proved using data collected from a commercial synchronous digital hierarchy (SDH) network. Experimental results show that a promising precision (95%) can be achieved in predicting optical equipment alarms.

1. Introduction

The emerging 5G technology and a large number of content-based applications bring unique connectivity challenges in terms of required bandwidth, resource utilization, etc. Optical networks, which support the bulk of this transformation, continue to grow in architectural complexity and heterogeneity. Optical networks require high availability and reliable operation to continuously serve end-to-end services. Traditional protection schemes require substantial investment at multiple layers and lead to under-utilization of available network resources. Even with protective measures, network failures still happen, typically caused by the underlying network equipment and optical transmission lines [1]. Artificial intelligence (AI) algorithms (specifically, supervised machine learning (ML) in this study) can learn system behavior from past data and estimate future responses based on the learned system model. These ML techniques can be used as important support tools for alarm prediction in optical networks.

A method of alarm pre-processing and correlation analysis for optical transport networks is proposed in Ref. [2]; the results show that the method is promising for trivial alarm identification, chain alarm mining, and root failure locating in existing optical networks. The main topic of Ref. [3] is failure prediction from large amounts of alarm records stored in different databases of non-cooperating network management systems. The main problems addressed in that paper are the evaluation of alarms, virtual reconstruction of the network, and the development of tools to overcome interoperability issues; the motivation is to assist human operators and minimize the cost of the alarm evaluation process. Ref. [4] uses ML as an instrument to address network assurance via dynamic data-driven operation. A cognitive failure detection architecture for intelligent network assurance in a software-defined network (SDN) controller is proposed and demonstrated based on real-world failure examples. The framework detects and identifies significant failures, and outperforms conventional fixed threshold-triggered operations, both in terms of detection precision and proactive reaction time.

Ref. [5] proposes a performance monitoring and failure prediction method in optical networks. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). The proposed protection plan primarily investigates how to predict the risk of an equipment failure, and the reported average prediction precision of their method is 95% when predicting the optical equipment failure state. Ref. [6] studies network failure prediction using 14 months of network alarm logs from a metropolitan area network. The preliminary results show that most network failures can be predicted by analyzing previous network running logs.


Corresponding author.
E-mail address: [email protected] (Y. Zhao).

https://doi.org/10.1016/j.yofte.2020.102251
Received 1 March 2020; Received in revised form 15 April 2020; Accepted 17 April 2020
1068-5200/ © 2020 Elsevier Inc. All rights reserved.

This method can detect failures at an early stage in practical applications and reduce unnecessary economic losses.

At present, optical networks are mainly under centralized control. If all functions, e.g., data processing, model training, and policy scheduling, are performed in the control plane, the controller suffers a seriously heavy burden [7]. Therefore, it is feasible to introduce AI engines into devices in the data plane to share the pressure of the controller. The AI engines can be deployed as embedded modules. In our previous study [8], coordination between AI engines in the control plane and device AI embedded on board is proposed. Device AI, as the name suggests, is an AI engine embedded on a device board, which can be deployed in a cabinet with other communications equipment. Device AI is proposed based on edge computing to support various AI applications. The distributed AI engines in devices complete the data processing in the data plane, which can greatly reduce the burden of the controller. But in terms of functional diversity and deployment flexibility, a single device AI is limited. Therefore, we propose the concept of cross-layer AI. Cross-layer AI is essentially a distributed AI based on the three-layer architecture of SDN. It consists of central AI in the control plane and several device AI engines in the data plane. Driven by network monitoring data and assisted by AI algorithms, it performs network maintenance and optimization functions.

In this paper, to predict alarms, we apply an ML-based classifier to estimate whether a particular device alarm will occur, using features such as the input optical power, laser bias current, etc. Compared to our previous work, the number of AI boards is increased to form a one-to-many cooperation mode. Data augmentation, model prediction, and other functions are realized through the deployment of device AI. With favorable accuracy, the amount of data used in this paper is greatly reduced. Monitoring records from an SDH network are used to prove the validity of the method. Under the cross-layer AI architecture, system functions such as data pre-processing and data augmentation can be implemented by the relevant device AI engines. A three-stage approach is then proposed for alarm prediction based on cross-layer AI interaction.

• Benefiting from the architecture, alarm risk assessment and data augmentation using SMOTE are expediently performed.
• Experimental results indicate that redundant data is effectively filtered out and the class-imbalance problem is solved.
• In this context, the prediction results of the random forest (RF) algorithm are analyzed in terms of precision and validity, and compared with the outcomes of SVM [9] and Adaboost [10].

The rest of the paper is organized as follows: Section 2 describes the architecture of SOON. Alarm classification prediction by cross-layer AI interaction is described in Section 3. In Section 4, the experimental setup and results are shown. Finally, Section 5 concludes the paper.

2. SOON architecture

Our previous study [11] proposed an operation, administration, and maintenance (OAM)-oriented optical network architecture, i.e., self-optimized optical networks (SOON), to improve the intelligence and automation of network maintenance management. As shown in Fig. 1, the architecture of SOON consists of central AI in the control plane, device AI in the data plane, a cognitive decision management (CDM) module, a database, and an SDN controller [12]. The functions are described as follows. 1) Central AI: the central AI conducts model training, which requires strong computing power, and helps to handle inter-domain issues. 2) Device AI: the device AI performs data extraction and cleaning, and executes models trained by the central AI. Device AI can be deployed in any node in the domain, and its functions can be equipped according to the requirements. 3) CDM: the CDM generates a failure disposal strategy by matching the network state against the knowledge database. The knowledge database can be updated when new failure disposal measures appear in the network. 4) Database: the database stores network state parameters and models trained by the central AI. Real-time data interaction occurs between the database and the other modules. 5) SDN controller: the SDN controller monitors the network status, coordinates each module, and executes the disposal policy.

After dividing logically or physically dispersed devices in the data plane into domains, the device AI and the central AI can deal with intra-domain and inter-domain issues with the assistance of the SDN controller. The CDM provides the network failure disposal strategy according to the content of the knowledge base [13]. The SDN controller guides the strategy implementation in the data plane. Different applications can be customized according to the requirements.

3. Alarm classification prediction by cross-layer AI

The alarms of optical network equipment are closely related to the equipment states. Equipment states can be quantified by the features collected by the network management system. The features are represented by the physical parameters of the equipment, such as optical power, laser current, environmental temperature, power consumption, and other parameters. In general, these data are recorded in the network management logs. The system periodically collects network performance and alarm parameters to evaluate the network status. The problem is the high cost of information collection and processing, especially when the network is large and the amount of information to be collected is high. Based on SOON, cross-layer AI is proposed to optimize performance and cost by combining the advantages of central AI functions in the control plane (mass data processing and long data storage cycles) and device AI functions in the equipment (such as fast, low-latency data processing and filtering). An alarm prediction method is designed based on cross-layer AI, which includes the six steps described below.

3.1. Step A: Data collection and pre-processing

Data collection and pre-processing are implemented by the device AI in the equipment. First, to ensure the quality of the data, the data points with missing feature values are deleted. Then, we use feature selection to reduce the data dimensions and the difficulty of the learning task. Next, we identify the features with the highest contribution degree using the RF algorithm. The contribution degree is a measure provided by the RF algorithm to quantify the importance of variables, and it is used here to screen out the important features for prediction.

Labels such as "alarm" and "normal" cannot be used directly for supervised ML, so we convert them to numeric values. Finally, we generate the training and testing datasets by matching the performance data (input) and the alarm status (output) according to the time and node name. The results are sent to the controller for the next step.
3.2. Step B: Alarm risk assessment

Alarm risk assessment is performed in the controller using the data processed by the device AI. One root alarm may cause many subsequent alarms, and when the root alarm is fixed, the derivative alarms disappear. Hence, dealing with the high-ranking alarms and filtering out alarms of low importance is often sufficient for daily network maintenance.

According to the equipment vendor's official documentation, alarms in optical networks can be divided into four levels: emergency, primary, secondary, and prompt. An emergency alarm is generally caused by a broken fiber or a laser malfunction, which may interrupt communication between a pair of nodes. A primary alarm is mostly triggered by a higher-level alarm and may affect one direction of a transceiver service. A secondary alarm is mostly an error-code or signal-degradation alarm affecting a single service. A prompt alarm is mainly a remote warning indication, which has little impact on the service.

During the monitoring period, multiple alarms may originate from the network, and the useful information often gets buried under repeated and useless alarms.


Fig. 1. The architecture and functional model of SOON.

To effectively evaluate the impact of alarms, we propose the concept of 'alarm risk', which is evaluated according to the frequency and severity level of an alarm within a period. R, P, and W represent the alarm risk value, the probability of alarm occurrence, and the effect weight, respectively:

R = P × W    (1)

The probability of alarm occurrence (P) is given by the number of such alarms divided by the total number of alarms of the corresponding level. The severity level is captured by the alarm effect weight (W). Referring to a prior study [14] and consulting the network operator, we assigned weights (W) to the four alarm levels based on their frequency and influence in the network. The effect weights are set to 6.64, 2.66, 0.66, and 0.04, respectively.

As shown in Fig. 2, there are 12,190, 19,933, 6584, and 10,662 alarms at the respective levels (over 68 days). Among them, input power-related (IN_PWR) alarms (IN_PWR_FAIL, IN_PWR_HIGH, IN_PWR_LOW) occur 1406 times, AIS alarms (TU_AIS, AU_AIS, MS_AIS) occur 904 times, and RDI alarms (HP_RDI, LP_RDI) occur 560 times. Using Eq. (1), the alarm risk value of IN_PWR is calculated as 6.64 × 1406/12,190 = 0.766. Similarly, the values for the other two types (AIS, RDI) are 0.121 and 0.056, respectively.

Besides, a threshold M on the risk value can be set. If the alarm risk value exceeds the threshold, the probability of a network alarm is high, and these filtered alarms are directly related to network failures. To ensure the quality of model training, it is also necessary to set a threshold N on the number of alarm occurrences: if the number of data points for an alarm is below this threshold, the data of that alarm are not used to train models. According to the experience of the operator and the statistics of alarm frequency, the risk-value threshold M and the occurrence threshold N are set to 0.6 and 600, respectively.
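As a concrete illustration of Eq. (1) and the two thresholds, the following sketch reproduces the reported counts; the level assignments for AIS and RDI are inferred from the reported risk values, and the filtering logic is an assumption.

```python
# Alarm risk R = P * W (Eq. (1)), using the counts reported for Fig. 2.
weights = {"emergency": 6.64, "primary": 2.66, "secondary": 0.66, "prompt": 0.04}
level_totals = {"emergency": 12190, "primary": 19933, "secondary": 6584, "prompt": 10662}

# (alarm type, severity level, number of occurrences in 68 days)
alarm_counts = [("IN_PWR", "emergency", 1406),
                ("AIS", "primary", 904),
                ("RDI", "secondary", 560)]

M, N = 0.6, 600  # risk-value threshold and occurrence threshold

for name, level, count in alarm_counts:
    p = count / level_totals[level]   # probability of occurrence within its level
    r = weights[level] * p            # alarm risk value
    keep = r > M and count > N        # only high-risk, frequent alarms are used for training
    print(f"{name}: R = {r:.3f}, used for training: {keep}")
# -> IN_PWR: R = 0.766 (kept); AIS: R = 0.121, RDI: R = 0.056 (filtered out)
```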
3.3. Step C: Data augmentation

Data augmentation is executed by another idle device AI in the data plane. Compared to the normal network operation records, the number of alarms in optical networks is only a small fraction. For example, we collected 1,535,901 performance records over 68 days, and only 49,369 of them are alarm items. Such imbalanced samples often lead to over-fitting, i.e., samples are always assigned to the class with the larger size. Data augmentation is an effective way to mitigate class imbalance. In this study, we use SMOTE [15] for data augmentation. The SMOTE algorithm analyzes the minority samples and adds new synthetic samples to the dataset based on that analysis. To illustrate its effect, we use the corresponding function in the deep learning framework to generate two types of data with a ratio of 1:9, 80 samples in total, distributed in the interval (−2, 4).
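The paper does not name the SMOTE implementation it uses; a sketch with the imbalanced-learn package (our assumption), using the class counts reported in Section 4, would be:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)

# Toy imbalanced dataset standing in for the monitoring records:
# many "normal" samples (label 0) and few "alarm" samples (label 1).
X_normal = rng.normal(loc=0.0, scale=1.0, size=(1369, 12))
X_alarm = rng.normal(loc=2.0, scale=1.0, size=(99, 12))
X = np.vstack([X_normal, X_alarm])
y = np.array([0] * 1369 + [1] * 99)

# SMOTE synthesizes new minority samples by interpolating between a minority
# sample and its nearest minority neighbours until the classes are balanced.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_res))  # -> [1369 1369], i.e. 2738 samples in total
```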

Fig. 2. Example of alarm risk assessment: calculating the 'alarm risk' value of the 3 most frequent types of alarms: input power (IN_PWR), alarm indicator signal (AIS)
and remote defect indicator (RDI).


Fig. 3. Data augmentation using SMOTE (a) SVM, (b) RF.

Fig. 3 shows the influence of data augmentation under different algorithms (SVM and RF). In Fig. 3(a), the number of minority samples (dark dots) increases significantly after data augmentation (right) compared with before data augmentation (left). Fig. 3(b) shows a similar effect. Besides, the classification boundary becomes much clearer after applying SMOTE, which means the classification precision can be improved significantly.

3.4. Step D: Algorithm selection and model training

The augmented data is transmitted to the central AI in the control plane for model training. Depending on the application scenario, data type, and quantity, an appropriate algorithm must be chosen. For example, the APRIORI [16] and LSTM [17] algorithms are suitable for time-series data, and the SMOTE algorithm is suitable for scenarios where class imbalance occurs. Here, the RF algorithm is selected because of its excellent performance on small datasets. The RF algorithm uses multiple decision trees with the idea of ensemble learning: for one input sample, N decision trees produce N category votes, and RF integrates all the votes and outputs the category with the most votes as the final result. RF can also give insights into the important features in the classification.

Hyperparameters, i.e., parameters that are set before the learning process, are initially set according to experience. Hyperparameter adjustment is then needed to find the model with the best generalization performance, and cross-validation is used to test the classification precision.
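A minimal sketch of Step D, assuming a scikit-learn random forest with a small hyperparameter grid and 5-fold cross-validation (the grid values and synthetic data are illustrative, not the paper's settings):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic 12-feature binary problem standing in for the augmented dataset.
X, y = make_classification(n_samples=2738, n_features=12, weights=[0.5, 0.5], random_state=0)

# Hyperparameters are first set from experience, then adjusted by a small grid
# search; cross-validation estimates the classification precision.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print("best hyperparameters:", grid.best_params_)
print("cross-validated precision:", cross_val_score(grid.best_estimator_, X, y, cv=5).mean())
```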
3.5. Step E: Alarm prediction

The trained model is sent to the device AI to perform the prediction. Alarm prediction can be converted into a binary classification problem (Alarm and Normal). The supervised learning model analyzes the training data and learns the relationship between the indicators and the labels. Fig. 4 shows the structure of the alarm prediction method. The training set and the test set have the same data distribution; for example, each item of data is composed of several features and a target (label). After extracting features from the training set, the ML algorithm can determine the alarm state on the test set.


Fig. 4. Alarm prediction method.

3.6. Step F: Alarm disposal strategy generation

After obtaining the prediction results, the CDM searches the database and generates an alarm disposal strategy. The SDN controller then instructs each node to make parameter adjustments and updates the status of the network equipment.

The central AI in the control plane can make full use of a graphics processing unit (GPU) to increase its computing power. It can interact with the controller and exchange data with other modules. Various AI platforms, such as TensorFlow, can be used for offline model training. Device AI, like an edge computing node, is enhanced for AI processing by being embedded in the device and can obtain the operation parameters of the device. Owing to the limitations of volume and computing power, the device AI executes the model and passes back the results rather than training the model. In practical applications, tasks can be decomposed and executed separately by multiple device AI engines. With the coordination of central AI and device AI, the output of the device AI and the decision-making function of the controller are utilized to realize the cross-layer AI interaction. Fig. 5 shows the procedures of cross-layer AI interaction for alarm prediction. Model training is performed by the central AI, while data pre-processing, data augmentation, and alarm prediction are performed by the device AI engines. After the results are returned, the disposal strategy is generated and issued by the controller.
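How the trained model is handed from the central AI to the device AI is not specified in the paper; one possible hand-off, sketched with joblib serialization (an assumption), is:

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# --- Central AI (control plane): train offline and serialize the model. ---
X_train = np.random.default_rng(0).normal(size=(200, 12))
y_train = (X_train[:, 0] > 0).astype(int)  # placeholder labels for illustration
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
joblib.dump(model, "alarm_model.joblib")   # artifact pushed to the device AI

# --- Device AI (data plane): load the model and run prediction only. ---
device_model = joblib.load("alarm_model.joblib")
todays_features = np.random.default_rng(1).normal(size=(1, 12))
alarm_predicted = int(device_model.predict(todays_features)[0])
print("predicted alarm state:", alarm_predicted)  # 1 = alarm, 0 = normal
# The result (not the raw data) is passed back to the controller, which
# generates the disposal strategy via the CDM.
```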
4. Experimental setup and results

To verify the prediction scheme based on cross-layer AI proposed in this paper, an experimental platform is set up. Two embedded devices, i.e., NVIDIA Jetson Nano boards, are used as the AI engines. The offline data is collected from a real network. Compared with the AI board (DP8020) used in Ref. [8], the Jetson Nano is compatible with multiple AI frameworks, which makes it easier to deploy. Besides, the Jetson Nano provides 472 GFLOPS (billion floating-point operations per second) of computing power and consumes only 5 W. Table 1 shows the comparison between the DP8020 and the Jetson Nano.

Table 1
Comparison between the DP-8020 and Jetson Nano boards.

                   DP-8020       Jetson Nano
Memory             2 GB          4 GB
Storage            8 GB          16 GB
Computing power    495 GFLOPS    472 GFLOPS
Compatibility      Low           High
Power              15 W          5 W

The performance of the proposed method is verified on a commercial SDH network. In the physical plane, the SDH nodes constitute a mesh topology. In the control plane, a central controller collects the operation and maintenance data from all nodes and then analyzes the data before providing instructions. We envision that the controller will be able to identify a potential equipment alarm (using the proposed prediction method) and initiate protection measures in advance (e.g., sending a message to the operator, executing automated recovery, etc.).

Each instance is composed of an input vector (made up of indicators) and an expected output value. Using feature engineering, we identify four indicators with the greatest impact on classification precision: input optical power (IOP), laser bias current (LBC), laser operating temperature (LOT), and output optical power (OOP). Each of them includes the maximum, minimum, and average values of the given day, so there are 12 features in each sample.

Fig. 5. Procedures of cross-layer AI interaction for alarm prediction scheme.


Fig. 6. Experimental testbed and demonstration setup.

According to the principle of supervised learning, each data sample is composed of a feature vector x and a class label. In x = (x1, x2, …, x12), x1–x3 correspond to the three values (i.e., the maximum, minimum, and average value of the given day) of OOP, x4–x6 correspond to the three values of LBC, and so on. For each vector x, the corresponding class label is '1' if the alarm would occur; otherwise, the label is '0' for normal operation.
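For illustration, the 12-dimensional feature vector can be assembled from raw monitoring records as follows; the column names and values are hypothetical.

```python
import pandas as pd

# Raw per-sample monitoring records (column names are assumptions); each
# indicator is aggregated into daily maximum, minimum, and average values.
raw = pd.DataFrame({
    "node": ["NE1"] * 4 + ["NE2"] * 4,
    "day":  [1, 1, 2, 2, 1, 1, 2, 2],
    "OOP":  [-2.1, -2.3, -2.2, -2.0, -3.0, -2.9, -9.8, -10.2],
    "LBC":  [30.5, 30.7, 30.6, 30.4, 41.0, 41.2, 55.0, 56.1],
    "LOT":  [25.1, 25.3, 25.0, 25.2, 26.0, 26.1, 28.9, 29.3],
    "IOP":  [-7.0, -7.1, -7.2, -7.0, -8.0, -8.1, -18.7, -19.0],
})

# Build x = (x1, ..., x12): max/min/mean of OOP, LBC, LOT, IOP per node and day.
features = raw.groupby(["node", "day"]).agg(["max", "min", "mean"])
features.columns = [f"{indicator}_{stat}" for indicator, stat in features.columns]
print(features.head())
# The class label ('1' = alarm, '0' = normal) is attached by matching the
# alarm log on the same node name and day.
```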
The data used for alarm prediction consists of equipment monitoring data collected over 68 days in an SDH network, from which 1468 instances of power-related data are extracted; the time unit is one day. Among them, 99 instances have the label '1' and 1369 instances have the label '0', which is an obvious class imbalance. After data oversampling, the number of instances with label '1' increases to 1369 and the total dataset grows to 2738 instances, so the normal data and the alarm data are balanced. All the data are divided into a training set and a test set according to the ratio of 7:3.
To conduct an experimental evaluation of the proposed scheme, we have built an experimental platform (see Fig. 6). The platform is based on SOON and NVIDIA Jetson Nano development boards. Two Jetson Nanos are deployed as device AI; they interact with the controller through the WebSocket protocol and can run multiple trained models and deliver the results. The SOON platform is implemented on a computer with the Ubuntu 16.04 operating system and an Intel Core i5-8250U processor with 8 GB of main memory. Fig. 7 shows the front-end graphical user interface (GUI) of the testbed. We can perform data processing, model parameter configuration, and result presentation using this GUI, and we can adjust the configuration parameters (e.g., epoch and learning rate) as required to obtain proper results. The training time of the model on the original data is 0.7550327 s, compared with 0.8581556 s after data augmentation.
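The message format and endpoint used between the Jetson Nano boards and the controller are not given in the paper; a hypothetical sketch of the device-side interaction, using the third-party websockets package, could look like this (it requires a controller endpoint to run against).

```python
import asyncio
import json
import websockets  # third-party "websockets" package

async def report_prediction(result: dict) -> None:
    # Endpoint and message schema are assumptions for illustration; in the
    # testbed, the device AI boards exchange messages with the SOON
    # controller over WebSocket.
    async with websockets.connect("ws://controller.example:8080/deviceai") as ws:
        await ws.send(json.dumps(result))
        ack = await ws.recv()  # e.g. the disposal strategy or an acknowledgement
        print("controller replied:", ack)

if __name__ == "__main__":
    asyncio.run(report_prediction({"node": "NE2", "day": 68, "alarm": 1}))
```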
Choosing the features with the greatest impact on the prediction results can reduce the amount of data used by the model. After evaluating the contributions, the key features can be screened out. As an ensemble learning algorithm, the RF algorithm can evaluate the contribution of each feature, i.e., its influence on the accuracy of the model prediction. The basic idea is to randomly permute the values in one feature column and observe how much the model accuracy is reduced: permuting an unimportant feature has little effect on accuracy, whereas permuting an important feature greatly reduces it. Fig. 8 shows the contribution of the key features to the prediction precision. The results are obtained from the collected training data. Among the features, the average output optical power (OOP) has the highest impact on the prediction outcome.
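The accuracy-drop procedure described above corresponds to permutation importance; a sketch using scikit-learn's implementation (our choice, on synthetic data) is shown below.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle one feature column at a time and measure how much accuracy drops;
# a large drop marks an important feature, a small drop an unimportant one.
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:4]:
    print(f"feature x{idx + 1}: mean accuracy drop = {result.importances_mean[idx]:.3f}")
```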
We use the prediction precision and the receiver operating characteristic (ROC) curve as evaluation metrics. Precision is the most common evaluation index and indicates the ratio of the number of correctly classified samples to the total number of samples. However, when the classes are imbalanced, precision can be a flawed evaluation index. The area under the curve (AUC) is an effective supplement to the precision index. By definition, the AUC is obtained by summing the areas of the parts under the ROC curve. The AUC takes into account the classifier's ability to separate positive and negative examples, so it provides a reasonable evaluation of the classifier in the case of imbalanced samples. A classifier with a larger AUC value usually has a higher precision rate.
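A minimal sketch of the evaluation, computing the precision (in the accuracy sense used here) and the AUC with scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2738, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# "Precision" in the sense used above: correctly classified / total samples.
acc = accuracy_score(y_te, rf.predict(X_te))

# AUC: area under the ROC curve, computed from the predicted class-1 scores;
# it remains informative even when the classes are imbalanced.
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"precision (accuracy) = {acc:.3f}, AUC = {auc:.3f}")
```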
Fig. 9(a) shows the comparison of prediction precision before and after data augmentation; the precision reported is the output of the RF algorithm. Prediction without data augmentation achieves a higher precision (95%), while after data augmentation the result is slightly lower (92%). The main reason for the higher precision without data augmentation is that the true positive rate is inflated, since the alarm samples contain very few labels '1' representing an alarm. The trained model therefore does not have good generalization ability, as illustrated by the AUC calculation in Fig. 9(b).

In Fig. 9(b), the performance of the model with data augmentation is significantly better than that without data augmentation (AUC 0.95 > 0.59). The AUC values indicate that, although the data without augmentation yields a higher precision, the resulting model performs poorly.

Fig. 10 compares the prediction precision of multiple algorithms, all trained with data augmentation. The Adaboost and SVM algorithms are used as baselines. The prediction precision of the Adaboost and RF algorithms reaches more than 91%. The SVM estimator trained on the same dataset provides 86% classification precision (compared with 95% in Ref. [5]) on the training set, and slightly lower precision on the test set. The ensemble learning methods (Adaboost, RF) combine many classifiers, so their classification precision is significantly improved. These prediction results indicate that the ML algorithms can achieve high precision.

Table 2 shows the relationship between prediction precision and data volume. From the comparison, it can be seen that the method used in this paper obtains a considerable prediction precision (95%) with the least amount of data (1468 instances). This shows that the use of cross-layer AI for alarm prediction can greatly reduce the amount of data required.


Fig. 7. The front-end graphical user interface: (a) the function selection, (b) model parameter adjustment, and (c) WebSocket message sequence.

5. Conclusion

To reduce the burden on the controller during alarm prediction, this paper proposes a novel alarm prediction method based on cross-layer AI, which extends the SOON system functions by deploying device AI in the data plane. Cross-layer AI can perform task decomposition and multi-party cooperation to improve processing and system management efficiency. Equipment data collected from the network management system of a commercial SDH network is used to prove the validity of the method, and a testbed has been built to evaluate its performance. Experimental results show that the alarm prediction method has high precision. In future work, a variety of network applications based on cross-layer AI will be studied. Besides, there are some open issues, such as algorithm selection in different scenarios and the determination of hyperparameters in model training. At present, the results of prediction depend on the data quality; how to maintain the accuracy of prediction in the case of insufficient data is also an important problem.

CRediT authorship contribution statement

Bing Zhang: Conceptualization, Software, Writing - original draft. Yongli Zhao: Writing - review & editing. Sabidur Rahman: Conceptualization, Methodology. Yajie Li: Writing - review & editing. Jie Zhang: Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work has been supported in part by the National Natural Science Foundation of China (NSFC) (61822105), the Fundamental Research Funds for the Central Universities (2019XD-A05), and the State Key Laboratory of Information Photonics and Optical Communications of China (IPOC2019ZR01).

Fig. 8. Contribution of features in prediction outcome.


Fig. 9. Comparison of prediction precision before and after data augmentation (a) the learning curve and (b) ROC curve.

Fig. 10. Comparison of prediction precision of various algorithms.

Table 2
The relationship between precision and data volume.

              Precision    Algorithm    Data volume
Ref. [5]      95%          DES-SVM      14,080
Ref. [8]      99%          ANN          19,904
This paper    95%          RF           1468

References

[1] F. Musumeci, et al., An overview on application of machine learning techniques in optical networks, IEEE Commun. Surv. Tutorials 21 (2) (2018) 1383–1408.
[2] D. Wang, et al., Dealing with alarms in optical networks using an intelligent system, IEEE Access 7 (2019) 97760–97770.
[3] M. Jaudet, N. Iqbal, A. Hussain, Neural networks for fault-prediction in a telecommunications network, Proc. 8th International Multitopic Conference (INMIC 2004), IEEE, 2004.
[4] D. Rafique, et al., Cognitive assurance architecture for optical network fault management, J. Lightwave Technol. 36 (7) (2018) 1443–1450.
[5] Z. Wang, et al., Failure prediction using machine learning and time series in optical network, Opt. Express 25 (16) (2017) 18553–18565.
[6] J. Zhong, W. Guo, Z. Wang, Study on network failure prediction based on alarm logs, Proc. 2016 3rd MEC International Conference on Big Data and Smart City (ICBDSC), IEEE, 2016.
[7] G. Liu, et al., The first testbed demonstration of cognitive end-to-end optical service provisioning with hierarchical learning across multiple autonomous systems, Optical Fiber Communication Conference, Optical Society of America, 2018.
[8] Y. Zhao, et al., Coordination between control layer AI and on-board AI in optical transport networks, IEEE/OSA J. Opt. Commun. Networking 12 (1) (2019) A49–A57.
[9] J.A.K. Suykens, J. Vandewalle, Least squares support vector machine classifiers, Neural Process. Lett. 9 (3) (1999) 293–300.
[10] G. Rätsch, T. Onoda, K.-R. Müller, Soft margins for AdaBoost, Machine Learning 42 (3) (2001) 287–320.
[11] Y. Zhao, et al., SOON: self-optimizing optical networks with machine learning, Opt. Express 26 (22) (2018) 28713–28726.
[12] B. Zhang, et al., Failure disposal by interaction of the cross-layer artificial intelligence on ONOS-based SDON platform, Proc. 2019 Optical Fiber Communications Conference and Exhibition (OFC), IEEE, 2019.
[13] N. Yoshikane, Applications of SDN-enabled optical transport network and cloud/edge computing technology, Proc. 2019 Optical Fiber Communications Conference and Exhibition (OFC), IEEE, 2019.
[14] P. Zhang, X. Gao, Analysis of reliability and component importance for all-digital protective systems, Proc. Chinese Soc. Elec. Eng. 28 (1) (2008) 77.
[15] N.V. Chawla, et al., SMOTE: synthetic minority over-sampling technique, J. Artif. Intell. Res. 16 (2002) 321–357.
[16] F. Bodon, A fast APRIORI implementation, FIMI, Vol. 3, 2003.
[17] F.A. Gers, J. Schmidhuber, F. Cummins, Learning to forget: continual prediction with LSTM, (1999) 850–855.
