Proceedings in Adaptation, Learning and Optimization 11

Jiuwen Cao
Chi Man Vong
Yoan Miche
Amaury Lendasse Editors

Proceedings
of ELM 2018
Proceedings in Adaptation, Learning
and Optimization

Volume 11

Series Editors
Meng-Hiot Lim, Nanyang Technological University, Singapore, Singapore
Yew Soon Ong, Nanyang Technological University, Singapore, Singapore
The roles of adaptation, learning and optimization are becoming increasingly
essential and intertwined. The capability of a system to adapt either through
modification of its physiological structure or via some revalidation process of
internal mechanisms that directly dictate the response or behavior is crucial in many
real world applications. Optimization lies at the heart of most machine learning
approaches while learning and optimization are two primary means to effect
adaptation in various forms. They usually involve computational processes
incorporated within the system that trigger parametric updating and knowledge
or model enhancement, giving rise to progressive improvement. This book series
serves as a channel to consolidate work related to topics linked to adaptation,
learning and optimization in systems and structures. Topics covered under this
series include:
• complex adaptive systems including evolutionary computation, memetic com-
puting, swarm intelligence, neural networks, fuzzy systems, tabu search, sim-
ulated annealing, etc.
• machine learning, data mining & mathematical programming
• hybridization of techniques that span across artificial intelligence and compu-
tational intelligence for synergistic alliance of strategies for problem-solving
• aspects of adaptation in robotics
• agent-based computing
• autonomic/pervasive computing
• dynamic optimization/learning in noisy and uncertain environment
• systemic alliance of stochastic and conventional search techniques
• all aspects of adaptations in man-machine systems.
This book series bridges the dichotomy of modern and conventional mathematical
and heuristic/meta-heuristics approaches to bring about effective adaptation,
learning and optimization. It propels the maxim that the old and the new can
come together and be combined synergistically to scale new heights in problem-
solving. To reach such a level, numerous research issues will emerge and
researchers will find the book series a convenient medium to track the progresses
made.
** Indexing: The books of this series are submitted to ISI Proceedings, DBLP,
Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/13543


Jiuwen Cao • Chi Man Vong • Yoan Miche • Amaury Lendasse

Editors

Proceedings of ELM 2018
Editors

Jiuwen Cao
Institute of Information and Control
Hangzhou Dianzi University
Xiasha, Hangzhou, China

Chi Man Vong
Department of Computer and Information Science
University of Macau
Taipa, Macao

Yoan Miche
Nokia Bell Labs, Cybersecurity Research
Espoo, Finland

Amaury Lendasse
Department of Information and Logistics Technology
University of Houston
Houston, TX, USA

Proceedings in Adaptation, Learning and Optimization
ISSN 2363-6084        ISSN 2363-6092 (electronic)
ISBN 978-3-030-23306-8        ISBN 978-3-030-23307-5 (eBook)
https://doi.org/10.1007/978-3-030-23307-5
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Random Orthogonal Projection Based Enhanced Bidirectional
Extreme Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Weipeng Cao, Jinzhu Gao, Xizhao Wang, Zhong Ming, and Shubin Cai
A Novel Feature Specificity Enhancement for Taste Recognition
by Electronic Tongue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Yanbing Chen, Tao Liu, Jianjun Chen, Dongqi Li, and Mengya Wu
Comparison of Classification Methods for Very High-Dimensional
Data in Sparse Random Projection Representation . . . . . . . . . . . . . . . . 17
Anton Akusok and Emil Eirola
A Robust and Dynamically Enhanced Neural Predictive Model
for Foreign Exchange Rate Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Lingkai Xing, Zhihong Man, Jinchuan Zheng, Tony Cricenti,
and Mengqiu Tao
Alzheimer’s Disease Computer Aided Diagnosis Based on Hierarchical
Extreme Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Zhongyang Wang, Junchang Xin, Yue Zhao, and Qiyong Guo
Key Variables Soft Measurement of Wastewater Treatment Process
Based on Hierarchical Extreme Learning Machine . . . . . . . . . . . . . . . . 45
Feixiang Zhao, Mingzhe Liu, Binyang Jia, Xin Jiang, and Jun Ren
A Fast Algorithm for Sparse Extreme Learning Machine . . . . . . . . . . . 55
Zhihong Miao and Qing He
Extreme Latent Representation Learning for Visual Classification . . . . 65
Tan Guo, Lei Zhang, and Xiaoheng Tan
An Optimized Data Distribution Model for ElasticChain to Support
Blockchain Scalable Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Dayu Jia, Junchang Xin, Zhiqiong Wang, Wei Guo, and Guoren Wang


An Algorithm of Sina Microblog User’s Sentimental Influence
Analysis Based on CNN+ELM Model . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Donghong Han, Fulin Wei, Lin Bai, Xiang Tang, TingShao Zhu,
and Guoren Wang
Extreme Learning Machine Based Intelligent Condition Monitoring
System on Train Door . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Xin Sun, K. V. Ling, K. K. Sin, and Lawrence Tay
Character-Level Hybrid Convolutional and Recurrent Neural
Network for Fast Text Categorization . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Bing Liu, Yong Zhou, and Wei Sun
Feature Points Selection for Rectangle Panorama Stitching . . . . . . . . . . 118
Weiqing Yan, Shuigen Wang, Guanghui Yue, Jindong Xu,
Xiangrong Tong, and Laihua Wang
Point-of-Interest Group Recommendation with an Extreme
Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Zhen Zhang, Guoren Wang, and Xiangguo Zhao
Research on Recognition of Multi-user Haptic Gestures . . . . . . . . . . . . 134
Lu Fang, Huaping Liu, and Yanzhi Dong
Benchmarking Hardware Accelerating Techniques for Extreme
Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Liang Li, Guoren Wang, Gang Wu, and Qi Zhang
An Event Recommendation Model Using ELM in Event-Based
Social Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Boyang Li, Guoren Wang, Yurong Cheng, and Yongjiao Sun
Reconstructing Bifurcation Diagrams of a Chaotic Neuron Model
Using an Extreme Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Yoshitaka Itoh and Masaharu Adachi
Extreme Learning Machine for Multi-label Classification . . . . . . . . . . . 173
Haigang Zhang, Jinfeng Yang, Guimin Jia, and Shaocheng Han
Accelerating ELM Training over Data Streams . . . . . . . . . . . . . . . . . . . 182
Hangxu Ji, Gang Wu, and Guoren Wang
Predictive Modeling of Hospital Readmissions with Sparse Bayesian
Extreme Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Nan Liu, Lian Leng Low, Sean Shao Wei Lam, Julian Thumboo,
and Marcus Eng Hock Ong
Rising Star Classification Based on Extreme Learning Machine . . . . . . 197
Yuliang Ma, Ye Yuan, Guoren Wang, Xin Bi, Zhongqing Wang,
and Yishu Wang

Hand Gesture Recognition Using Clip Device Applicable to Smart
Watch Based on Flexible Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Sung-Woo Byun, Da-Kyeong Oh, MyoungJin Son, Ju Hee Kim,
Ye Jin Lee, and Seok-Pil Lee
Receding Horizon Optimal Control of Hybrid Electric Vehicles Using
ELM-Based Driver Acceleration Rate Prediction . . . . . . . . . . . . . . . . . . 216
Jiangyan Zhang, Fuguo Xu, Yahui Zhang, and Tielong Shen
CO-LEELM: Continuous-Output Location Estimation Using Extreme
Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Felis Dwiyasa and Meng-Hiot Lim
Unsupervised Absent Multiple Kernel Extreme Learning Machine . . . . 236
Lingyun Xiang, Guohan Zhao, Qian Li, and Zijie Zhu
Intelligent Machine Tools Recognition Based on Hybrid CNNs
and ELMs Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Kun Zhang, Lu-Lu Tang, Zhi-Xin Yang, and Lu-Qing Luo
Scalable IP Core for Feed Forward Random Networks . . . . . . . . . . . . . 253
Anurag Daram, Karan Paluru, Vedant Karia, and Dhireesha Kudithipudi
Multi-objective Artificial Bee Colony Algorithm with Information
Learning for Model Optimization of Extreme Learning Machine . . . . . 263
Hao Zhang, Dingyi Zhang, and Tao Ku
Short Term PV Power Forecasting Using ELM and Probabilistic
Prediction Interval Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Jatin Verma, Xu Yan, Junhua Zhao, and Zhao Xu
A Novel ELM Ensemble for Time Series Prediction . . . . . . . . . . . . . . . 283
Zhen Li, Karl Ratner, Edward Ratner, Kallin Khan, Kaj-Mikael Bjork,
and Amaury Lendasse
An ELM-Based Ensemble Strategy for POI Recommendation . . . . . . . . 292
Xue He, Tiancheng Zhang, Hengyu Liu, and Ge Yu
A Method Based on S-transform and Hybrid Kernel
Extreme Learning Machine for Complex Power Quality
Disturbances Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Chen Zhao, Kaicheng Li, and Xuebin Xu
Sparse Bayesian Learning for Extreme Learning
Machine Auto-encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Guanghao Zhang, Dongshun Cui, Shangbo Mao, and Guang-Bin Huang

A Soft Computing-Based Daily Rainfall Forecasting Model Using
ELM and GEP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Yuzhong Peng, Huasheng Zhao, Jie Li, Xiao Qin, Jianping Liao,
and Zhiping Liu
Comparing ELM with SVM in the Field of Sentiment Classification
of Social Media Text Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Zhihuan Chen, Zhaoxia Wang, Zhiping Lin, and Ting Yang
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Random Orthogonal Projection Based
Enhanced Bidirectional Extreme Learning
Machine

Weipeng Cao1, Jinzhu Gao2, Xizhao Wang1, Zhong Ming1(✉), and Shubin Cai1

1 College of Computer Science and Software Engineering, Shenzhen University,
Shenzhen 518060, China
[email protected], {xzwang,mingz,shubin}@szu.edu.cn
2 School of Engineering and Computer Science, University of the Pacific,
Stockton, CA 95211, USA
[email protected]

Abstract. Bidirectional extreme learning machine (B-ELM) divides the
learning process into two parts: at the odd learning steps, the parameters of the
new hidden node are generated randomly, while at the even learning steps, the
parameters of the new hidden node are obtained analytically from the parameters
of the former node. However, some of the odd-hidden nodes play only a minor
role, which has a negative impact on the even-hidden nodes and results in a sharp
rise in the network complexity. To avoid this issue, we propose a random
orthogonal projection based enhanced bidirectional extreme learning machine
algorithm (OEB-ELM). In OEB-ELM, several orthogonal candidate nodes are
generated randomly at each odd learning step, and only the node with the largest
residual error reduction is added to the existing network. Experiments on six real
datasets have shown that OEB-ELM has better generalization performance and
stability than the B-ELM, EB-ELM, and EI-ELM algorithms.

Keywords: Extreme learning machine · Bidirectional extreme learning machine ·
Random orthogonal projection

1 Introduction

The training mechanism of traditional single hidden layer feed-forward neural networks
(SLFN) is that the input weights and hidden bias are randomly assigned initial values
and then iteratively tuned with methods such as gradient descent until the residual error
reaches the expected value. This method has several notorious drawbacks such as slow
convergence rate and local minima problem.
Different from traditional SLFN, neural networks with random weights (NNRW)
train models in a non-iterative way [1, 2]. In NNRW, the input weights and hidden bias
are randomly generated from a given range and kept fixed throughout the training

© Springer Nature Switzerland AG 2020


J. Cao et al. (Eds.): ELM 2018, PALO 11, pp. 1–10, 2020.
https://doi.org/10.1007/978-3-030-23307-5_1

process, while the output weights are obtained by solving a linear system of matrix
equations. Compared with traditional SLFN, NNRW can learn faster with acceptable
accuracy.
Extreme learning machine (ELM) is a typical NNRW, which was proposed by
Huang et al. in 2004 [3]. ELM inherits the advantages of NNRW and extends it to a
unified form. In recent years, many ELM based algorithms have been proposed [4–6]
and applied to various fields such as unsupervised learning [7] and traffic sign
recognition [8]. Although ELM and its variants have achieved many interesting results,
there are still several important problems that have not been solved thoroughly, one of
which is the determination of the number of hidden nodes [1, 9].
In recent years, many algorithms have been proposed to determine the number of
hidden nodes. We can group them into two categories: incremental and pruning
strategies. For incremental strategy, the model begins with a small initial network and
then gradually adds new hidden nodes until the desired accuracy is achieved. Some
notable incremental algorithms include I-ELM [10], EI-ELM [11], B-ELM [12],
EB-ELM [13], etc. For pruning strategy, the model begins with a larger than necessary
network and then cuts off the redundant or less effective hidden nodes. Some notable
pruning algorithms include P-ELM [14], OP-ELM [15], etc.
This paper focuses on optimizing the performance of the existing B-ELM algo-
rithm. In B-ELM [12], the authors divided the learning process into two parts: the odd
and the even learning steps. At the odd learning steps, the new hidden node is gen-
erated randomly at one time, while at the even learning steps, the new hidden node is
determined by a formula defined by the former added node parameters. Compared with
the fully random incremental algorithms such as I-ELM and EI-ELM, B-ELM shows a
much faster convergence rate.
From the above analysis, we can infer that the hidden nodes generated at the odd
learning steps (the odd-hidden nodes) play an important role in B-ELM models.
However, the quality of the odd-hidden nodes cannot be guaranteed. Actually, some of
them may play a minor role, which will cause a sharp rise in the network complexity.
The initial motivation of this study is to alleviate this issue.
The orthogonalization technique is an effective approach to parameter optimization.
Wang et al. [16] proved that the ELM model with the random orthogonal
projection has better capability of sample structure preserving (SSP). Kasun et al. [17]
stacked the ELM auto-encoders into a deep ELM architecture based on the orthogonal
weight matrix. Huang et al. [18] orthogonalized the input weights matrix when building
the local receptive fields based ELM model.
Inspired by the above works, in this study, we propose a novel random orthogonal
projection based enhanced bidirectional extreme learning machine algorithm (OEB-
ELM). In OEB-ELM, at each odd learning step, we first randomly generate K candidate
hidden nodes and orthogonalize them into orthogonal hidden nodes based on the Gram-
Schmidt orthogonalization method. Then we train it as an initial model for hidden
nodes selection. After obtaining the corresponding value of residual error reduction for
each candidate node, the one with the largest residual error reduction will be selected as
the final odd-hidden node and added to the existing network. The even-hidden nodes
are obtained in the same way as B-ELM and EB-ELM.

Our main contributions in this study are as follows.


(1) The odd learning steps in B-ELM are optimized and better hidden nodes can be
obtained. Compared with the B-ELM, the proposed algorithm achieves models
with better generalization performance and smaller network structure.
(2) The method for setting the number of candidate hidden nodes in EB-ELM is improved.
In OEB-ELM, the number of candidate hidden nodes is automatically determined
according to the data attributes, which can effectively improve the computational
efficiency and reduce human intervention in the model.
(3) The random orthogonal projection technique is used to improve the capability of
SSP of the candidate hidden nodes selection model, and thus the quality of the
hidden nodes is further improved. Experiments on six UCI regression datasets have
demonstrated the efficiency of our method.
The organization of this paper is as follows: Sect. 2 briefly reviews the related
algorithms. The proposed OEB-ELM is described in Sect. 3. The details of the
experiment results and analysis are given in Sect. 4. Section 5 concludes the paper.

2 Review of ELM, I-ELM, B-ELM and EB-ELM

A typical network structure of an ELM with a single hidden layer is shown in Fig. 1. The
training mechanism of ELM is that the input weights \omega and hidden biases b are generated
randomly from a given range and kept fixed throughout the training process, while the
output weights \beta are obtained by solving a linear system of matrix equations.

Fig. 1. A basic ELM neural network structure



The above ELM network can be modeled as

    \sum_{i=1}^{L} \beta_i \, g(\omega_i \cdot x_j + b_i) = t_j, \quad \omega_i \in R^n, \; \beta_i \in R, \; j = 1, \ldots, N    (1)

where g(\cdot) denotes the activation function, t_j denotes the actual value of each sample,
and N is the size of the dataset. Equation (1) can be rewritten as

    H \beta = T    (2)

where

    H = \begin{pmatrix} g(\omega_1 \cdot x_1 + b_1) & \cdots & g(\omega_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\omega_1 \cdot x_N + b_1) & \cdots & g(\omega_L \cdot x_N + b_L) \end{pmatrix}_{N \times L},
    \quad \beta = \begin{pmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{pmatrix}_{L \times m},
    \quad T = \begin{pmatrix} t_1^T \\ \vdots \\ t_N^T \end{pmatrix}_{N \times m}.

In Eq. (2), H represents the hidden layer output matrix of ELM and the output
weights \beta can be obtained by

    \beta = H^{+} T    (3)

where H^{+} is the Moore–Penrose generalized inverse of H.
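As an illustration of Eqs. (1)-(3), the following is a minimal NumPy sketch of this training procedure (a sketch under our own naming, not a reference implementation; np.linalg.pinv stands in for the Moore-Penrose generalized inverse H+, and the sigmoid activation and sampling ranges follow the experimental setup in Sect. 4):

import numpy as np

def elm_train(X, T, L, seed=0):
    # X: N x n inputs, T: N x m targets, L: number of hidden nodes.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(L, n))   # random input weights, one row per node
    b = rng.uniform(0.0, 1.0, size=L)         # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # N x L hidden layer output matrix, Eq. (2)
    beta = np.linalg.pinv(H) @ T              # output weights beta = H+ T, Eq. (3)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta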


The residual error measures the closeness between the current network f_n with n
hidden nodes and the target function f, which can be summarized as

    e_n = f_n - f    (4)

In the I-ELM algorithm [10], the random hidden nodes are added to the hidden
layer one by one, and the parameters of the existing hidden nodes stay the same after a
new hidden node is added. The output function f_n at the nth step can be expressed by

    f_n(x) = f_{n-1}(x) + \beta_n G_n(x)    (5)

where \beta_n denotes the output weight between the newly added hidden node and the
output nodes, and G_n(x) is the corresponding output of this hidden node.
The I-ELM can automatically generate the network structure; however, the network
structure is often very complex because some of the hidden nodes play a minor role in
the network. To alleviate this issue, the EI-ELM [11] and B-ELM [12] algorithms were
proposed. The core idea of the EI-ELM algorithm is to generate K candidate hidden
nodes at each learning step and only select the one with the smallest residual error.
Actually, the I-ELM is a special case of the EI-ELM when K = 1.
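As a sketch of one such incremental step, the following illustrative NumPy function adds a single random node and updates the residual. The function name and structure are ours; the output weight rule \beta_n = \langle e, G_n \rangle / \|G_n\|^2 is the same least-squares rule that the OEB-ELM pseudocode in Sect. 3 uses for its even nodes:

import numpy as np

def ielm_add_node(X, E, rng):
    # One I-ELM step: add one random hidden node and shrink the residual E (Eqs. 4-5).
    n = X.shape[1]
    w = rng.uniform(-1.0, 1.0, size=n)         # random input weights of the new node
    bias = rng.uniform(0.0, 1.0)               # random hidden bias
    h = 1.0 / (1.0 + np.exp(-(X @ w + bias)))  # node output G_n(x) on all N samples
    beta = (E.T @ h) / (h @ h)                 # output weight minimizing the residual
    E = E - np.outer(h, beta)                  # e_n = e_{n-1} - beta_n * G_n
    return w, bias, beta, E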

Different from the EI-ELM, the B-ELM divides the training process into two parts:
the odd and the even learning steps. At each odd learning step (i.e., when the number of
hidden nodes L 2 f2n þ 1; n 2 Zg), the new hidden node is generated randomly as in
the I-ELM. At each even learning step (i.e., when the number of hidden nodes
L 2 f2n; n 2 Zg), the parameters of the new hidden node are obtained by

^ 2n ¼ g1 ðuðH2n ÞÞx1


x ð6Þ
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
^b2n ¼ mseðg1 ðuðH2n ÞÞ  x2n xÞ ð7Þ

^ 2n ¼ u1 ðg1 ðx2n x þ b2n ÞÞ


H ð8Þ

where g1 and u1 denote the inverse functions of g and u, respectively.
From the training mechanism of the B-ELM mentioned above, we can infer that the
hidden nodes generated at the odd learning steps have a significant impact on the model
performance. However, the B-ELM cannot guarantee the quality of these hidden nodes.
The odd-hidden nodes that play a minor role in the network will cause a sharp rise in
the network complexity.
To avoid this issue, we proposed the enhanced random search method to optimize
the odd learning steps of B-ELM, that is, the EB-ELM algorithm [13]. In EB-ELM, at
each odd learning step, K candidate hidden nodes are generated and only the one with
the largest residual error reduction will be selected. EB-ELM can achieve better gen-
eralization performance than B-ELM. However, the number of candidate nodes K in
EB-ELM is assigned based on experience, which makes it difficult to balance the
computational efficiency and model performance.

3 The Proposed Algorithm

In this section, we propose a random orthogonal projection based enhanced
bidirectional extreme learning machine (OEB-ELM) for regression problems.
Theorem 1. Suppose W \in R^{K \times K} is an orthogonal matrix, which satisfies W^T W = I. Then
for any X \in R^K, \|WX\|_2 = \|X\|_2.

Proof. \|WX\|_2^2 = X^T W^T W X = X^T I X = \|X\|_2^2.
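A quick numerical check of Theorem 1 (our own sketch; the orthogonal matrix is obtained here from a QR decomposition of a random matrix):

import numpy as np

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # square Q satisfies W^T W = I
x = rng.standard_normal(5)
print(np.allclose(W.T @ W, np.eye(5)))             # True: W is orthogonal
print(np.linalg.norm(W @ x) - np.linalg.norm(x))   # ~0: the norm is preserved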
From Theorem 1 and its proof, we can infer that the orthogonal projection can
provide a good capability of sample structure preserving for the initial model, which will
improve the performance of the model and ensure the good quality of the candidate
nodes. The proposed OEB-ELM can be summarized as follows:

OEB-ELM Algorithm:

Input:  A training dataset D = {(x_i, t_i)}_{i=1}^{N} ⊂ R^n × R, the number of
        hidden nodes L, an activation function G, a maximum number of
        hidden nodes L_max, and an expected error ε.
Output: The model structure and the output weights matrix β.

Step 1 (Initialization):
    Let the number of hidden nodes L = 0 and the residual error E = T.
    Set K = d, where K denotes the maximum number of trials of assigning
    candidate nodes at each odd learning step, and d denotes the number
    of data attributes.

Step 2 (Learning):
    While L < L_max and E > ε do
        (a) Increase the number of hidden nodes L: L = L + 1.
        (b) If L ∈ {2n + 1, n ∈ Z} then
            Randomly generate the input weights matrix
            W_random = [ω_(1), ω_(2), ..., ω_(K)]_{K×K} and the random
            hidden bias matrix B_random = [b_(1), b_(2), ..., b_(K)]_{K×1};
            Orthogonalize W_random and B_random using the Gram-Schmidt
            orthogonalization method to obtain W_orth and B_orth, which
            satisfy W_orth^T W_orth = I and B_orth^T B_orth = 1, respectively.
            Calculate the temporal output weights β_temp according to
                β_temp = H_temp^+ T.
            For j = 1 : K
                Calculate the residual error E_(j) after pruning the j-th
                hidden node:
                    E_(j) = T − H_residual · β_residual
            End for
        (c) Let j* = arg max_{1 ≤ j ≤ K} ||E_(j)||. Set ω_L = ω_orth(j*) and
            b_L = b_(j*). Update H_L for the new hidden node and calculate
            the residual error E after adding the L-th hidden node:
                E = E − H_L β_L.
            End if
        (d) If L ∈ {2n, n ∈ Z} then
            Calculate the error feedback function sequence H_L according to
                H_2n = e_{2n−1} · (β_{2n−1})^{−1}
            Calculate the parameter pair (ω_L, b_L) and update H_L based on
            Eqs. (6), (7), and (8).
            Calculate the output weight β_L according to
                β_2n = ⟨e_{2n−1}, H_2n⟩ / ||H_2n||^2
            Calculate E after adding the new L-th hidden node:
                E = E − H_L β_L.
            End if
    End while
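To make the odd learning step concrete, the following is a minimal NumPy sketch of the candidate generation and selection described above. It is a sketch, not the authors' implementation: the function name select_odd_node is ours, the sigmoid activation follows the experiments in Sect. 4, the Gram-Schmidt step is realized via a QR decomposition, and the current residual E plays the role of the target T of the pseudocode (they coincide at the first odd step):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def select_odd_node(X, E, rng):
    # X: N x d inputs, E: N x m current residual. K = d candidates (Step 1).
    N, d = X.shape
    K = d
    W, _ = np.linalg.qr(rng.uniform(-1.0, 1.0, size=(K, K)))  # orthogonal: W^T W = I
    b = rng.uniform(0.0, 1.0, size=K)
    b = b / np.linalg.norm(b)                  # B_orth^T B_orth = 1
    H = sigmoid(X @ W.T + b)                   # N x K outputs of all candidate nodes
    beta = np.linalg.pinv(H) @ E               # temporal output weights (K x m)
    best_j, best_err = 0, -np.inf
    for j in range(K):
        keep = [i for i in range(K) if i != j]
        E_j = E - H[:, keep] @ beta[keep]      # residual after pruning candidate j
        err = np.linalg.norm(E_j)              # large E_(j): node j mattered most
        if err > best_err:
            best_j, best_err = j, err
    w_L, b_L = W[best_j], b[best_j]            # keep only the best candidate
    h = sigmoid(X @ w_L + b_L)
    beta_L = (E.T @ h) / (h @ h)               # least-squares weight of the new node
    E_new = E - np.outer(h, beta_L)            # E = E - H_L * beta_L
    return w_L, b_L, beta_L, E_new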

4 Experimental Results and Analysis

In this section, we present the details of our experiment settings and results. Our
experiments are conducted on six benchmark regression problems from the UCI machine
learning repository [19], and the specification of these datasets is given in Table 1. We
chose the Sigmoid function, i.e., G(x; \omega, b) = 1 / (1 + \exp(-(\omega x + b))), as the
activation function of B-ELM, EB-ELM, EI-ELM, and OEB-ELM. The input weights \omega
are randomly generated from the range (-1, 1) and the hidden biases b are generated
randomly from the range (0, 1) using a uniform sampling distribution. For each
regression problem, the average results over 50 trials are reported for each algorithm.
In this study, we did our experiments in the MATLAB R2014a environment on the
same Windows 10 machine with an Intel Core i5 2.3 GHz CPU and 8 GB RAM.

Table 1. Specification of six regression datasets

Name                             Training data  Testing data  Attributes
Airfoil Self-noise               750            753           5
Housing                          250            256           13
Concrete compressive strength    500            530           8
White wine quality               2000           2898          11
Abalone                          2000           2177          8
Red wine quality                 800            799           11

Our experiments are conducted based on the following two questions:


(1) Under the same network structure, which algorithm can achieve the best gener-
alization performance and stability?
(2) With the increase of the number of hidden nodes, which algorithm has the fastest
convergence rate?
For Question (1), we set the same number of hidden nodes for the B-ELM,
EB-ELM, EI-ELM, and OEB-ELM algorithms. The Root-Mean-Square Error of
Testing (Testing RMSE), the Root-Mean-Square Error of Training (Training RMSE),
Standard Deviation of Testing RMSE (SD), and learning time are selected as the
indicators for performance testing. The smaller testing RMSE denotes the better gen-
eralization performance of the algorithm and the smaller SD indicates the better
stability of the algorithm. The performance comparison of the four algorithms is shown
in Table 2. It is noted that the close results are underlined and the best results are in
boldface.
From Table 2, we observe that the proposed OEB-ELM algorithm has smallest
testing RMSE and standard deviation when applied on six regression datasets, which
means that OEB-ELM can achieve better generalization performance and stability than
B-ELM, EB-ELM, and EI-ELM. It is noted that the EI-ELM algorithm runs the longest
time on all datasets, which shows that the OEB-ELM algorithm is more efficient than
the EI-ELM algorithm.

Table 2. Performance comparison of the EB-ELM, B-ELM, EI-ELM, and OEB-ELM

Datasets              Algorithm  Learning time (s)  Standard deviation  Training RMSE  Testing RMSE
Airfoil self-noise    EB-ELM     5.5403             0.0025              0.0709         0.0726
                      B-ELM      1.2950             0.0032              0.0729         0.0749
                      EI-ELM     25.3281            0.2230              0.0469         0.2567
                      OEB-ELM    5.7078             0.0025              0.0715         0.0723
Housing               EB-ELM     12.6897            0.0012              0.0182         0.0196
                      B-ELM      2.9103             0.0060              0.0214         0.0236
                      EI-ELM     23.9253            0.1165              0.0043         0.2232
                      OEB-ELM    15.8434            0.0010              0.0182         0.0192
Concrete compressive  EB-ELM     13.1275            0.0033              0.0216         0.0232
strength              B-ELM      2.8516             0.0225              0.0229         0.0338
                      EI-ELM     24.5106            0.0323              0.0075         0.0439
                      OEB-ELM    13.3669            0.0012              0.0214         0.0222
White wine            EB-ELM     13.1369            0.0015              0.0150         0.0167
                      B-ELM      3.1109             0.0071              0.0159         0.0181
                      EI-ELM     32.2019            75.0810             0.0107         44.2757
                      OEB-ELM    16.7169            0.0013              0.0146         0.0162
Abalone               EB-ELM     12.7944            0.0079              0.0106         0.0143
                      B-ELM      3.0247             0.0149              0.0099         0.0206
                      EI-ELM     32.0638            0.0078              <0.0001        0.0164
                      OEB-ELM    11.1956            0.0071              0.0130         0.0136
Red wine              EB-ELM     3.5128             0.0064              0.0608         0.0659
                      B-ELM      0.8991             0.0044              0.0625         0.0652
                      EI-ELM     7.1844             0.0060              0.0522         0.0657
                      OEB-ELM    4.1181             0.0020              0.0589         0.0620

From the above analysis, it can be seen that the proposed OEB-ELM algorithm
achieves models with better generalization performance and stability than the B-ELM
and EB-ELM algorithms. Compared with the EI-ELM algorithm, the OEB-ELM has
higher computational efficiency and can achieve better generalization performance and
stability in most cases.
For Question (2), we gradually increase the number of hidden nodes from 1 to 50
and record the corresponding testing RMSE in the process of adding the hidden nodes.
The performance comparison of each algorithm with the increase of hidden nodes is
shown in Fig. 2.
Fig. 2. The testing RMSE updating curves of the EB-ELM, B-ELM, EI-ELM, and OEB-ELM

Figure 2 shows the changes of the testing RMSE of the four algorithms with increasing
hidden nodes on the Housing dataset. From Fig. 2, we can observe that the B-ELM,
EB-ELM, and OEB-ELM algorithms achieve high accuracy with only a few hidden nodes,
which means that these three algorithms converge faster than the EI-ELM algorithm. We
also observe that the OEB-ELM algorithm has a smaller testing RMSE and less fluctuation
than the other algorithms, which means that the OEB-ELM algorithm achieves models
with better generalization performance and stability than the B-ELM, EB-ELM, and
EI-ELM algorithms. Similar results can be found in the other cases.

5 Conclusions

In this study, we proposed a novel random orthogonal projection based enhanced
bidirectional extreme learning machine algorithm (OEB-ELM) for regression
problems. In OEB-ELM, the odd-hidden nodes are optimized using the random
orthogonal projection method and an improved enhanced random search method.
Compared with B-ELM, OEB-ELM has better generalization performance and a
smaller network structure. Compared with EB-ELM, the number of candidate hidden
nodes in OEB-ELM can be automatically determined from the data attributes. Note
that both EB-ELM and B-ELM are specific cases of OEB-ELM: EB-ELM is the
non-automated and non-orthogonal OEB-ELM, while B-ELM is the non-orthogonal
OEB-ELM with the number of candidate nodes K = 1.

Acknowledgment. This research was supported by the National Natural Science Foundation of
China (61672358).

References
1. Cao, W.P., Wang, X.Z., Ming, Z., Gao, J.Z.: A review on neural networks with random
weights. Neurocomputing 275, 278–287 (2018)
2. Cao, J.W., Lin, Z.P.: Extreme learning machines on high dimensional and large data
applications: a survey. Math. Prob. Eng. 2015, 1–13 (2015)
3. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: a new learning scheme of
feedforward neural networks. In: Proceedings of 2004 IEEE International Joint Conference
on Neural Networks, vol. 2, pp. 985–990 (2004)
4. Zhang, L., Zhang, D.: Evolutionary cost-sensitive extreme learning machine. IEEE Trans.
Neural Netw. Learn. Syst. 28(12), 3045–3060 (2017)
5. Ding, S.F., Guo, L.L., Hou, Y.L.: Extreme learning machine with kernel model based on
deep learning. Neural Comput. Appl. 28(8), 1975–1984 (2017)
6. Zhang, H.G., Zhang, S., Yin, Y.X.: Online sequential ELM algorithm with forgetting factor
for real applications. Neurocomputing 261, 144–152 (2017)
7. He, Q., Jin, X., Du, C.Y., Zhuang, F.Z., Shi, Z.Z.: Clustering in extreme learning machine
feature space. Neurocomputing 128, 88–95 (2014)
8. Huang, Z., Yu, Y., Gu, J., Liu, H.: An efficient method for traffic sign recognition based on
extreme learning machine. IEEE Trans. Cybern. 47(4), 920–933 (2017)
9. Cao, W.P., Gao, J.Z., Ming, Z., Cai, S.B.: Some tricks in parameter selection for extreme
learning machine. IOP Conf. Ser. Mater. Sci. Eng. 261(1), 012002 (2017)
10. Huang, G.B., Chen, L., Siew, C.K.: Universal approximation using incremental constructive
feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 17(4), 879–892
(2006)
11. Huang, G.B., Chen, L.: Enhanced random search based incremental extreme learning
machine. Neurocomputing 71(16–18), 3460–3468 (2008)
12. Yang, Y.M., Wang, Y.N., Yuan, X.F.: Bidirectional extreme learning machine for regression
problem and its learning effectiveness. IEEE Trans. Neural Netw. Learn. Syst. 23(9),
1498–1505 (2012)
13. Cao, W.P., Ming, Z., Wang, X.Z., Cai, S.B.: Improved bidirectional extreme learning
machine based on enhanced random search. Memetic Comput., 1–8 (2017). https://doi.org/
10.1007/s12293-017-0238-1
14. Rong, H.J., Ong, Y.S., Tan, A.H., Zhu, Z.: A fast pruned-extreme learning machine for
classification problem. Neurocomputing 72, 359–366 (2008)
15. Miche, Y., Sorjamaa, A., Bas, P., Simula, O., Jutten, C., Lendasse, A.: OP-ELM: optimally
pruned extreme learning machine. IEEE Trans. Neural Netw. 21(1), 158–162 (2010)
16. Wang, W.H., Liu, X.Y.: The selection of input weights of extreme learning machine: a
sample structure preserving point of view. Neurocomputing 261, 28–36 (2017)
17. Kasun, L.L.C., Zhou, H., Huang, G.B., Vong, C.M.: Representational learning with extreme
learning machine for big data. IEEE Intell. Syst. 28, 31–34 (2013)
18. Huang, G.B., Bai, Z., Kasun, L.L.C., Vong, C.M.: Local receptive fields based extreme
learning machine. IEEE Comput. Intell. Mag. 10, 18–29 (2015)
19. Blake, C., Merz, C.: UCI repository of machine learning databases. Technical report, Dept.
Inf. Comput. Sci., Univ. California, Irvine, CA, USA (1998). http://archive.ics.uci.edu/ml/
A Novel Feature Specificity Enhancement
for Taste Recognition by Electronic Tongue

Yanbing Chen, Tao Liu(✉), Jianjun Chen, Dongqi Li, and Mengya Wu

School of Microelectronics and Communication Engineering,
No. 174 Shazheng Street, Shapingba District, Chongqing 400044, China
{yanbingchen,cquliutao,cjj,20161213071}@cqu.edu.cn,
[email protected]

Abstract. An electronic tongue (E-Tongue) is a bionic system that relies on an
array of electrode sensors to realize taste perception. Large amplitude pulse
voltammetry (LAPV) is an important E-Tongue type which generally generates a
large amount of response data. Considering that the high common-mode
characteristics existing in sensor arrays largely depress recognition performance,
we propose an alternative feature extraction method for sensor specificity
enhancement, called feature specificity enhancement (FSE). Specifically, the
proposed FSE method measures sensor specificity on paired sensor responses and
utilizes a kernel function for nonlinear projection. Meanwhile, a kernel extreme
learning machine (KELM) is utilized to evaluate the overall recognition
performance. In the experimental evaluation, we introduce several feature
extraction methods and classifiers for comparison. The results indicate that the
proposed feature extraction combined with KELM shows the highest recognition
accuracy of 95% on our E-Tongue dataset, which is superior to other methods in
both effectiveness and efficiency.

Keywords: Electronic tongue · Specificity enhancement · Kernel function

1 Introduction

An electronic tongue (E-Tongue) is a common type of artificial-taste equipment for
liquid phase analysis [1–3]. It relies on an array of sensors with low selectivity and
proper pattern recognition methods to realize taste identification like human beings
[4]. Many E-Tongue applications have been reported [5–13]. Among them, both tea
and wine are popular in recent E-Tongue academic works.
Current E-Tongues are mainly divided into two categories: potentiometric [14] and
voltammetric types [15]. Compared with the former type, the latter one attracts more
attention [16–18]. Among voltammetric E-Tongues, the large amplitude pulse
voltammetry (LAPV) type is popular. For feature extraction of LAPV, recent studies
handle the responses of a single electrode directly. However, they ignore the common-
mode signals existing between different electrodes in a sensor array, which may be
harmful to classification.

© Springer Nature Switzerland AG 2020


J. Cao et al. (Eds.): ELM 2018, PALO 11, pp. 11–16, 2020.
https://doi.org/10.1007/978-3-030-23307-5_2

In this paper, we propose a feature enhancement method using a nonlinear
specificity metric, which can alleviate the common-mode components in sensor
responses. Coupled with the kernel extreme learning machine (KELM), the proposed
method obtains the highest recognition rate in our evaluation among several referenced
methods. The results show that the nonlinear specificity enhancement associated with
KELM clearly helps the data analysis of LAPV based E-Tongues.
The rest of this paper is organized as follows. Section 2 introduces the proposed
method. The experimental results and analysis are presented in Sect. 3. Finally, con-
clusions are presented in the last section.

2 Methods

Notations. In this paper, X = [x_1, x_2, \ldots, x_m]^T \in R^{m \times d} represents the raw data of a
certain sample, where m represents the sample dimension, equivalent to the number of
sensors, and d represents the number of values in each sensor response.

2.1 Feature Specificity Enhancement


We propose the feature specificity enhancement (FSE) scheme for E-Tongues working
in the LAPV manner and perform the feature extraction as follows:

    Z_{ij}^n = \kappa(x_i^n, x_j^n), \quad i \neq j    (1)

where x_i^n represents the i-th sensor response to the n-th sample, Z_{ij}^n indicates the n-th
sample feature between the i-th and j-th sensor responses, and \kappa(\cdot) denotes a kernel
function that projects the original specificity component to a nonlinear space. Moreover, we
introduce the kernel function to solve the "dimension disaster" problem in space projection
[19] as follows:

    \kappa(x_i, x_j) = \exp\left( -\frac{\|x_i - x_j\|_2^2}{2\sigma^2} \right)    (2)

where \exp(\cdot) represents the exponential function, \sigma is the width of the kernel function,
and \|\cdot\|_2 denotes the l2-norm.
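As an illustration of Eqs. (1) and (2), a minimal NumPy sketch of the FSE feature computation for one sample could look as follows (our own naming; the kernel width sigma is a free parameter not fixed in the text):

import numpy as np

def fse_features(X, sigma=1.0):
    # X: m x d raw sample (m sensors, d readings per sensor), per the notation above.
    m = X.shape[0]
    feats = []
    for i in range(m):
        for j in range(i + 1, m):             # each unordered sensor pair, i != j
            d2 = np.sum((X[i] - X[j]) ** 2)   # squared l2 distance of paired responses
            feats.append(np.exp(-d2 / (2.0 * sigma ** 2)))  # Gaussian kernel, Eq. (2)
    return np.array(feats)                    # m(m-1)/2 pairwise specificity features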
In the recognition stage, the extreme learning machine (ELM) module [20] is a favorable
choice. It randomly initializes the input weights W = [w_1, w_2, \ldots, w_L]^T \in R^{L \times D} and
biases b = [b_1, b_2, \ldots, b_L] \in R^L, and then the corresponding output weight matrix
\beta \in R^{L \times C} can be analytically calculated based on the output matrix of the hidden
layer. The output matrix H of the hidden layer with L hidden neurons is computed as:

    H = \begin{pmatrix} g(w_1^T x_1 + b_1) & \cdots & g(w_L^T x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1^T x_N + b_1) & \cdots & g(w_L^T x_N + b_L) \end{pmatrix}    (3)

where g(\cdot) is the activation function. If regularization is applied, the ELM learning
model can be expressed as follows:

    \min_{\beta} \; \frac{1}{2}\|\beta\|^2 + \mu \cdot \frac{1}{2} \sum_{i=1}^{N} \|\xi_i\|^2
    \quad \text{s.t.} \; h(x_i)\beta = t_i - \xi_i, \; i = 1, 2, \ldots, N    (4)

where \mu is the regularization coefficient and \xi_i denotes the prediction error of the ELM
on the training set. The approximate solutions for the output weight matrix \beta are
calculated as:

    \beta = \left( H^T H + \frac{I_{L \times L}}{\mu} \right)^{-1} H^T T, \quad N \geq L
    \beta = H^T \left( H H^T + \frac{I_{N \times N}}{\mu} \right)^{-1} T, \quad N < L    (5)

where T = [t_1, t_2, \ldots, t_N]^T \in R^{N \times C} denotes the label matrix of the training set and
N is the number of training samples. Therefore, the output of the ELM can be computed as:

    f(x) = h(x) H^T \left( H H^T + \frac{I_{N \times N}}{\mu} \right)^{-1} T    (6)

The KELM additionally introduces a kernel function when calculating the output of
the network [21], defined as K_{ij} = h(x_i) \cdot h(x_j) = k(x_i, x_j). Thus, Eq. (6) can be
expressed as:

    f(x) = \begin{pmatrix} k(x, x_1) \\ \vdots \\ k(x, x_N) \end{pmatrix}^T \left( K + \frac{I_{N \times N}}{\mu} \right)^{-1} T    (7)
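A compact NumPy sketch of KELM training and prediction following Eqs. (4)-(7) (our illustrative code, not the authors' implementation: mu denotes the regularization coefficient \mu, and the Gaussian kernel of Eq. (2) is reused as k(·,·)):

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def kelm_train(X, T, mu=100.0, sigma=1.0):
    # Solve (K + I/mu) alpha = T, so that f(x) = k(x, X) alpha, per Eq. (7).
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + np.eye(len(X)) / mu, T)
    return alpha

def kelm_predict(Xtest, Xtrain, alpha, sigma=1.0):
    return gaussian_kernel(Xtest, Xtrain, sigma) @ alpha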

3 Experiments

3.1 Experimental Data


The data acquisition was performed on our own developed E-Tongue system [22].
Seven kinds of drinks, including red wine, white spirit, beer, oolong tea, black tea,
maofeng tea, and pu'er tea, were selected as test objects. In the experiments, we
formulated nine tests for each kind of drink. Thus, a total of 63 (7 kinds × 9 tests) samples
were collected.

3.2 Experimental Results


In this section, we compare the FSE method experimentally with three feature extraction
methods: Raw (no treatment), principal component analysis (PCA), and discrete
wavelet transform (DWT). After the original E-Tongue signals are processed by the
different feature extraction methods, support vector machine (SVM), random forest (RF),
and KELM are implemented as the recognition part for evaluation. We use the
leave-one-out (LOO) strategy for cross validation. The average accuracies of cross
validation are reported in Table 1 and the total computation time for cross-validation
training and testing is presented in Table 2. From Tables 1 and 2, we can make the
following observations:
(1) When SVM is used for classification, the proposed FSE with SVM performs
significantly better than the other feature extraction methods; FSE with SVM
achieves 90.48%. In terms of execution time, FSE reaches the shortest time
expense using SVM among all the feature extraction methods.
(2) When RF is used for classification, both the FSE and Raw methods get the highest
average accuracy (82.54%) compared with the other feature extraction methods. In
terms of computation time, FSE is roughly 80 times faster than Raw (4.24 s versus
344.70 s).
(3) When KELM is adopted, FSE gets the highest accuracy of 95.24%. Compared with
the raw feature (22.22%), PCA (69.84%), and DWT (88.89%), it is obvious that
KELM shows better fitting and reasoning ability when using the proposed FSE
feature extraction method. Moreover, the specificity metric with the Hilbert
projection is more favorable to KELM than to any other classifier. As for time
consumption, FSE coupled with KELM costs the least time among all methods.
This indicates that KELM keeps the minimum amount of computation while
providing excellent classification results.

Table 1. Accuracy comparison

        Feature extraction methods
        Raw      PCA      DWT      FSE
RF      82.54%   73.02%   77.78%   82.54%
SVM     84.13%   79.37%   77.78%   90.48%
KELM    22.22%   69.84%   88.89%   95.24%

Table 2. Time consumption comparison

        Feature extraction methods
        Raw       PCA      DWT      FSE
RF      344.70 s  43.12 s  56.88 s  4.24 s
SVM     48.69 s   37.32 s  52.88 s  2.56 s
KELM    5.63 s    34.40 s  49.73 s  0.06 s

4 Conclusion

In this article, we proposed an FSE method for nonlinear feature extraction from
E-Tongue data and achieved taste recognition by using several typical classifiers such as
SVM, RF, and KELM. The proposed FSE coupled with KELM achieves the best results
in both accuracy and computational efficiency on the data set collected by our self-
developed E-Tongue system. We should admit that FSE seems to be effective in dealing
with feature extraction from high dimensional data, especially LAPV signals. On the
other hand, KELM can greatly promote the overall recognition performance in both
accuracy and speed.

References
1. Legin, A., Rudnitskaya, A., Lvova, L., Di Nataleb, C., D’Amicob, A.: Evaluation of Italian
wine by the electronic tongue: recognition, quantitative analysis and correlation with human
sensory perception. Anal. Chim. Acta 484(1), 33–44 (2003)
2. Ghosh, A., Bag, A.K., Sharma, P., et al.: Monitoring the fermentation process and detection
of optimum fermentation time of black tea using an electronic tongue. IEEE Sensors J.
15(11), 6255–6262 (2015)
3. Verrelli, G., Lvova, L., Paolesse, R., et al.: Metalloporphyrin - based electronic tongue: an
application for the analysis of Italian white wines. Sensors 7(11), 2750–2762 (2007)
4. Tahara, Y., Toko, K.: Electronic tongues–a review. IEEE Sensors J. 13(8), 3001–3011
(2013)
5. Kirsanov, D., Legin, E., Zagrebin, A., et al.: Mimicking Daphnia magna bioassay
performance by an electronic tongue for urban water quality control. Anal. Chim. Acta
824, 64–70 (2014)
6. Wei, Z., Wang, J.: Tracing floral and geographical origins of honeys by potentiometric and
voltammetric electronic tongue. Comput. Electron. Agric. 108, 112–122 (2014)
7. Wang, L., Niu, Q., Hui, Y., Jin, H.: Discrimination of rice with different pretreatment
methods by using a voltammetric electronic tongue. Sensors 15(7), 17767–17785 (2015)
8. Apetrei, I.M., Apetrei, C.: Application of voltammetric e-tongue for the detection of
ammonia and putrescine in beef products. Sens. Actuators B Chem. 234, 371–379 (2016)
9. Ciosek, P., Brzózka, Z., Wróblewski, W.: Classification of beverages using a reduced sensor
array. Sens. Actuators B Chem. 103(1), 76–83 (2004)
10. Domínguez, R.B., Morenobarón, L., Muñoz, R., et al.: Voltammetric electronic tongue and
support vector machines for identification of selected features in Mexican coffee. Sensors
14(9), 17770–17785 (2014)
11. Palit, M., Tudu, B., Bhattacharyya, N., et al.: Comparison of multivariate preprocessing
techniques as applied to electronic tongue based pattern classification for black tea. Anal.
Chim. Acta 675(1), 8–15 (2010)
12. Gutiérrez, M., Llobera, A., Ipatov, A., et al.: Application of an E-tongue to the analysis of
monovarietal and blends of white wines. Sensors 11(5), 4840–4857 (2011)
13. Dias, L.A., et al.: An electronic tongue taste evaluation: identification of goat milk
adulteration with bovine milk. Sens. Actuators B Chem. 136(1), 209–217 (2009)
14. Ciosek, P., Maminska, R., Dybko, A., et al.: Potentiometric electronic tongue based on
integrated array of microelectrodes. Sens. Actuators B Chem. 127(1), 8–14 (2007)
15. Ivarsson, P., et al.: Discrimination of tea by means of a voltammetric electronic tongue and
different applied waveforms. Sens. Actuators B Chem. 76(1), 449–454 (2001)
16. Winquist, F., Wide, P., Lundström, I.: An electronic tongue based on voltammetry. Anal.
Chim. Acta 357(1–2), 21–31 (1997)
17. Tian, S.Y., Deng, S.P., Chen, Z.X.: Multifrequency large amplitude pulse voltammetry:
a novel electrochemical method for electronic tongue. Sens. Actuators B Chem. 123(2),
1049–1056 (2007)
18. Palit, M., Tudu, B., Dutta, P.K., et al.: Classification of black tea taste and correlation with
tea taster’s mark using voltammetric electronic tongue. IEEE Trans. Instrum. Meas. 59(8),
2230–2239 (2010)
19. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
20. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: Theory and applications.
Neurocomputing 70(1), 489–501 (2006)
21. Huang, G.B., Zhou, H., Ding, X., et al.: Extreme learning machine for regression and
multiclass classification. IEEE Trans. Syst. Man Cybern. Part B 42(2), 513–529 (2012)
22. Liu, T., Chen, Y., Li, D., et al.: An active feature selection strategy for DWT in artificial
taste. J. Sens. 2018, 1–11 (2018)
Another random document with
no related content on Scribd:
I've been as bald as an onion."
"Sure," drawled McDowell. "The jury will decide." He turned to
O'Toole. "Are you a doctor?"
"I am not a licensed Doctor of Medicine."
"We'll see if what you are doing can be turned into a charge of
practicing with no license."
"I'm not practicing medicine. I'm a follicologist."
"Yeah? Then what's this feather-business all about?"
"Simple. Evolution has caused every genus, every specimen of life
to pass upward from the sea. Hair is evolved from scales and
feathers evolved also from scales.
"Now," continued O'Toole, "baldness is attributed to lack of
nourishment for the hair on the scalp. It dies. The same thing often
occurs in agriculture—"
"What has farming to do with hair-growing?" demanded McDowell.
"I was coming to that. When wheat will grow no longer in a field, they
plant it with corn. It is called 'Rotation of Crops.' Similarly, I cause a
change in the growth-output of the scalp. It starts off with a light
covering of scales, evolves into feathers in a few days, and the
feathers evolve to completion. This takes seven weeks. After this
time, the feathers die because of the differences in evolutionary
ending of the host. Then, with the scalp renewed by the so-called
Rotation of Crops."
"Uh-huh. Well, we'll let the jury decide!"
Two months elapsed before O'Toole came to trial. But meantime, the
judge took a vacation and returned with a luxuriant growth of hair on
his head. The jury was not cited for contempt of court even though
most of them insisted on keeping their hats on during proceedings.
O'Toole had a good lawyer.
And Judge Murphy beamed down over the bench and said: "O'Toole,
you are guilty, but sentence is suspended indefinitely. Just don't get
into trouble again, that's all. And gentlemen, Lieutenant McDowell,
Dr. Muldoon, and Sergeant O'Leary, I commend all of your work and
will direct that you, Mr. McCarthy, be recompensed. As for you," he
said to the ex-featherhead. "Mr. William B. Windsor, we have no use
for foreigners—"
Mr. Windsor never got a chance to state that he was no foreigner; his
mother was a Clancy.
THE END.
*** END OF THE PROJECT GUTENBERG EBOOK ALIEN ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying copyright
royalties. Special rules, set forth in the General Terms of Use part of
this license, apply to copying and distributing Project Gutenberg™
electronic works to protect the PROJECT GUTENBERG™ concept
and trademark. Project Gutenberg is a registered trademark, and
may not be used if you charge for an eBook, except by following the
terms of the trademark license, including paying royalties for use of
the Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is very
easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free


distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund from
the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be


used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law in
the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms of
this agreement by keeping this work in the same format with its
attached full Project Gutenberg™ License when you share it without
charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the terms
of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears, or
with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:

This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning of
this work.

1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project
Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except
for the “Right of Replacement or Refund” described in paragraph
1.F.3, the Project Gutenberg Literary Archive Foundation, the owner
of the Project Gutenberg™ trademark, and any other party
distributing a Project Gutenberg™ electronic work under this
agreement, disclaim all liability to you for damages, costs and
expenses, including legal fees. YOU AGREE THAT YOU HAVE NO
REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF
WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE
FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving it,
you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or entity
that provided you with the defective work may elect to provide a
replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth in
paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and distribution
of Project Gutenberg™ electronic works, harmless from all liability,
costs and expenses, including legal fees, that arise directly or
indirectly from any of the following which you do or cause to occur:
(a) distribution of this or any Project Gutenberg™ work, (b)
alteration, modification, or additions or deletions to any Project
Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many small
donations ($1 to $5,000) are particularly important to maintaining tax
exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.
