Proceedings in Adaptation, Learning and Optimization 11
Jiuwen Cao
Chi Man Vong
Yoan Miche
Amaury Lendasse Editors
Proceedings of ELM 2018
Proceedings in Adaptation, Learning
and Optimization
Volume 11
Series Editors
Meng-Hiot Lim, Nanyang Technological University, Singapore, Singapore
Yew Soon Ong, Nanyang Technological University, Singapore, Singapore
The roles of adaptation, learning and optimization are becoming increasingly
essential and intertwined. The capability of a system to adapt either through
modification of its physiological structure or via some revalidation process of
internal mechanisms that directly dictate the response or behavior is crucial in many
real world applications. Optimization lies at the heart of most machine learning
approaches while learning and optimization are two primary means to effect
adaptation in various forms. They usually involve computational processes
incorporated within the system that trigger parametric updating and knowledge
or model enhancement, giving rise to progressive improvement. This book series
serves as a channel to consolidate work related to topics linked to adaptation,
learning and optimization in systems and structures. Topics covered under this
series include:
• complex adaptive systems including evolutionary computation, memetic computing, swarm intelligence, neural networks, fuzzy systems, tabu search, simulated annealing, etc.
• machine learning, data mining & mathematical programming
• hybridization of techniques that span across artificial intelligence and computational intelligence for synergistic alliance of strategies for problem-solving
• aspects of adaptation in robotics
• agent-based computing
• autonomic/pervasive computing
• dynamic optimization/learning in noisy and uncertain environment
• systemic alliance of stochastic and conventional search techniques
• all aspects of adaptations in man-machine systems.
This book series bridges the dichotomy of modern and conventional mathematical
and heuristic/meta-heuristics approaches to bring about effective adaptation,
learning and optimization. It propels the maxim that the old and the new can
come together and be combined synergistically to scale new heights in problem-solving. To reach such a level, numerous research issues will emerge, and researchers will find the book series a convenient medium to track the progress made.
** Indexing: The books of this series are submitted to ISI Proceedings, DBLP,
Google Scholar and SpringerLink **
Editors

Jiuwen Cao
Institute of Information and Control
Hangzhou Dianzi University
Xiasha, Hangzhou, China

Chi Man Vong
Department of Computer and Information Science
University of Macau
Taipa, Macao

Yoan Miche
Nokia Bell Labs, Cybersecurity Research
Espoo, Finland

Amaury Lendasse
Department of Information and Logistics Technology
University of Houston
Houston, TX, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
1 Introduction
The training mechanism of traditional single hidden layer feed-forward neural networks
(SLFN) is that the input weights and hidden bias are randomly assigned initial values
and then iteratively tuned with methods such as gradient descent until the residual error
reaches the expected value. This method has several notorious drawbacks, such as a slow convergence rate and the local minima problem.
Different from traditional SLFN, neural networks with random weights (NNRW)
train models in a non-iterative way [1, 2]. In NNRW, the input weights and hidden bias
are randomly generated from a given range and kept fixed throughout the training
process, while the output weights are obtained by solving a linear system of matrix
equations. Compared with traditional SLFN, NNRW can learn faster with acceptable
accuracy.
Extreme learning machine (ELM) is a typical NNRW, which was proposed by
Huang et al. in 2004 [3]. ELM inherits the advantages of NNRW and extends it to a
unified form. In recent years, many ELM based algorithms have been proposed [4–6]
and applied to various fields such as unsupervised learning [7] and traffic sign
recognition [8]. Although ELM and its variants have achieved many interesting results,
there are still several important problems that have not been solved thoroughly, one of
which is the determination of the number of hidden nodes [1, 9].
In recent years, many algorithms have been proposed to determine the number of hidden nodes. They can be grouped into two categories: incremental and pruning strategies. In the incremental strategy, the model begins with a small initial network and gradually adds new hidden nodes until the desired accuracy is achieved; notable incremental algorithms include I-ELM [10], EI-ELM [11], B-ELM [12], and EB-ELM [13]. In the pruning strategy, the model begins with a larger-than-necessary network and then cuts off redundant or less effective hidden nodes; notable pruning algorithms include P-ELM [14] and OP-ELM [15].
This paper focuses on optimizing the performance of the existing B-ELM algorithm. In B-ELM [12], the authors divided the learning process into two parts: the odd and the even learning steps. At the odd learning steps, the new hidden node is generated randomly, while at the even learning steps, the new hidden node is determined by a formula defined by the parameters of the previously added node. Compared with fully random incremental algorithms such as I-ELM and EI-ELM, B-ELM shows a much faster convergence rate.
From the above analysis, we can infer that the hidden nodes generated at the odd
learning steps (the odd-hidden nodes) play an important role in B-ELM models.
However, the quality of the odd-hidden nodes cannot be guaranteed. Actually, some of
them may play a minor role, which will cause a sharp rise in the network complexity.
The initial motivation of this study is to alleviate this issue.
The orthogonalization technique is an effective approach to parameter optimization. Wang et al. [16] proved that an ELM model with random orthogonal projection has better capability of sample structure preserving (SSP). Kasun et al. [17] stacked ELM auto-encoders into a deep ELM architecture based on orthogonal weight matrices. Huang et al. [18] orthogonalized the input weight matrix when building the local receptive fields based ELM model.
Inspired by the above works, in this study we propose a novel random orthogonal projection based enhanced bidirectional extreme learning machine algorithm (OEB-ELM). In OEB-ELM, at each odd learning step, we first randomly generate K candidate hidden nodes and orthogonalize them using the Gram-Schmidt orthogonalization method. We then train an initial model for hidden node selection: after obtaining the residual error reduction for each candidate node, the one with the largest reduction is selected as the final odd-hidden node and added to the existing network. The even-hidden nodes are obtained in the same way as in B-ELM and EB-ELM.
A typical network structure of an ELM with a single hidden layer is shown in Fig. 1. The training mechanism of ELM is that the input weights ω and hidden biases b are generated randomly from a given range and kept fixed throughout the training process, while the output weights β are obtained by solving a system of matrix equations:
Σ_{i=1}^{L} β_i g(ω_i · x_j + b_i) = t_j,   ω_i ∈ R^n, β_i ∈ R,  j = 1, ..., N        (1)

where g(·) denotes the activation function, t_j denotes the actual value of each sample, and N is the size of the dataset. Equation (1) can be rewritten as

Hβ = T        (2)

where

H = [ g(ω_1 · x_1 + b_1)  ...  g(ω_L · x_1 + b_L) ]
    [         ...         ...          ...        ]
    [ g(ω_1 · x_N + b_1)  ...  g(ω_L · x_N + b_L) ]  (N×L),

β = [β_1^T, ..., β_L^T]^T  (L×m),   T = [t_1^T, ..., t_N^T]^T  (N×m).
In Eq. (2), H represents the hidden-layer output matrix of ELM, and the output weights β can be obtained by

β = H⁺T        (3)

where H⁺ denotes the Moore-Penrose generalized inverse of H.
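To make this training mechanism concrete, the following is a minimal sketch in Python/NumPy (the paper's experiments were run in MATLAB, so the language and function names here are illustrative assumptions; the sigmoid activation and the weight/bias ranges follow the experimental setup described later):

```python
import numpy as np

def train_elm(X, T, L, rng=None):
    """Minimal ELM training sketch for Eqs. (1)-(3): a random, fixed hidden
    layer and least-squares output weights via the Moore-Penrose
    pseudo-inverse. X: (N, n) inputs, T: (N, m) targets, L: hidden nodes."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n, L))   # input weights, fixed after init
    b = rng.uniform(0.0, 1.0, size=L)         # hidden biases, fixed after init
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output matrix, Eq. (2)
    beta = np.linalg.pinv(H) @ T              # output weights, Eq. (3)
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Network output H(X) @ beta for new inputs."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```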
In the I-ELM algorithm [10], random hidden nodes are added to the hidden layer one by one, and the parameters of the existing hidden nodes stay the same after a new hidden node is added. Denoting the target function by f, the residual error after the nth node has been added is

e_n = f_n − f        (4)

where f_n is the output function at the nth step, which can be expressed as

f_n(x) = f_{n−1}(x) + β_n G_n(x)        (5)

where β_n denotes the output weight between the newly added hidden node and the output nodes, and G_n(x) is the corresponding output of the hidden node.
I-ELM can automatically generate the network structure; however, the resulting network is often very complex because some of the hidden nodes play only a minor role. To alleviate this issue, the EI-ELM [11] and B-ELM [12] algorithms were proposed. The core idea of EI-ELM is to generate K candidate hidden nodes at each learning step and select only the one yielding the smallest residual error. In fact, I-ELM is a special case of EI-ELM with K = 1.
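The candidate-selection idea of EI-ELM can be sketched as follows (a hypothetical Python/NumPy illustration, not the authors' code; the residual update follows the incremental scheme of Eqs. (4)-(5)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ei_elm_add_node(X, E, K, rng=None):
    """One EI-ELM learning step (sketch): draw K candidate hidden nodes and
    keep the one leaving the smallest residual error. With K = 1 this
    reduces to I-ELM. X: (N, n) inputs, E: (N, m) current residual."""
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(K):
        w = rng.uniform(-1.0, 1.0, size=X.shape[1])
        b = rng.uniform(0.0, 1.0)
        g = sigmoid(X @ w + b)              # candidate node output, shape (N,)
        beta = (g @ E) / (g @ g)            # least-squares output weight(s)
        E_new = E - np.outer(g, beta)       # residual if this candidate is kept
        if best is None or np.linalg.norm(E_new) < np.linalg.norm(best[3]):
            best = (w, b, beta, E_new)
    return best  # (w, b, beta, updated residual E)
```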
Different from EI-ELM, B-ELM divides the training process into two parts: the odd and the even learning steps. At each odd learning step (i.e., when the number of hidden nodes L ∈ {2n + 1, n ∈ Z}), the new hidden node is generated randomly, as in I-ELM. At each even learning step (i.e., when L ∈ {2n, n ∈ Z}), the parameters of the new hidden node are obtained from the closed-form expressions in Eqs. (6)-(8), where g⁻¹ and u⁻¹ denote the inverse functions of g and u, respectively.
From the training mechanism of the B-ELM mentioned above, we can infer that the
hidden nodes generated at the odd learning steps have a significant impact on the model
performance. However, the B-ELM cannot guarantee the quality of these hidden nodes.
The odd-hidden nodes that play a minor role in the network will cause a sharp rise in
the network complexity.
To address this issue, we previously proposed an enhanced random search method that optimizes the odd learning steps of B-ELM, namely the EB-ELM algorithm [13]. In EB-ELM, at each odd learning step, K candidate hidden nodes are generated and only the one with the largest residual error reduction is selected. EB-ELM achieves better generalization performance than B-ELM. However, the number of candidate nodes K in EB-ELM is set empirically, which makes it difficult to balance computational efficiency against model performance.
OEB-ELM Algorithm:

Input: a training dataset D = {(x_i, t_i)}_{i=1}^{N} ⊂ R^n × R^m, an activation function G, a maximum number of hidden nodes L_max, and an expected error ε.
Output: the model structure and the output weights matrix β.

Step 1 (Initialization): Let the number of hidden nodes L = 0 and the residual error E = T. Set K = d, where K denotes the maximum number of trials of assigning candidate nodes at each odd learning step and d denotes the number of data attributes.

Step 2 (Learning):
while L < L_max and ||E|| > ε do
    (a) Increase the number of hidden nodes: L = L + 1.
    (b) if L ∈ {2n + 1, n ∈ Z} then
        Randomly generate the input weights matrix W_random = [ω^(1), ω^(2), ..., ω^(K)]_{K×K} and the random hidden bias matrix B_random = [b^(1), b^(2), ..., b^(K)]_{K×1}.
        Orthogonalize W_random and B_random using the Gram-Schmidt orthogonalization method to obtain W_orth and B_orth, which satisfy W_orth^T W_orth = I and B_orth^T B_orth = 1, respectively.
        Calculate the temporary output weights β_temp = H_temp⁺ T.
        for j = 1 : K
            Calculate the residual error E(j) after pruning the j-th hidden node: E(j) = T − H_residual · β_residual.
        end for
    (c) Let j* = arg max_{1 ≤ j ≤ K} ||E(j)||. Set ω_L = ω_orth(j*) and b_L = b(j*).
        Update H_L for the new hidden node and calculate the residual error E after adding the L-th hidden node: E = E − H_L β_L.
        end if
    (d) if L ∈ {2n, n ∈ Z} then
        Calculate the error feedback function sequence H_L according to H_{2n} = e_{2n−1} · (β_{2n−1})⁻¹.
        Calculate the parameter pair (ω_L, b_L) and update H_L based on Eqs. (6), (7), and (8).
        Calculate the output weight β_L according to β_{2n} = ⟨e_{2n−1}, H_{2n}⟩ / ||H_{2n}||².
        Calculate E after adding the L-th hidden node: E = E − H_L β_L.
        end if
end while
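As a concrete illustration of the odd-step initialization, the following Python/NumPy sketch generates the candidate weight and bias matrices and orthogonalizes them. It uses QR factorization as a numerically stable stand-in for classical Gram-Schmidt; the sampling ranges come from the experimental setup below, and the function name and interface are assumptions:

```python
import numpy as np

def orthogonal_candidates(d, K, rng=None):
    """Sketch of the OEB-ELM odd-step initialization: K random candidate
    weight vectors (one per column) and a bias vector, orthogonalized so
    that W_orth^T W_orth = I and B_orth^T B_orth = 1."""
    rng = np.random.default_rng(rng)
    W_random = rng.uniform(-1.0, 1.0, size=(d, K))  # one candidate per column
    B_random = rng.uniform(0.0, 1.0, size=K)
    # The Q factor of a QR factorization gives the same orthonormal columns
    # that classical Gram-Schmidt would produce from W_random.
    W_orth, _ = np.linalg.qr(W_random)
    B_orth = B_random / np.linalg.norm(B_random)    # unit-norm bias vector
    return W_orth, B_orth
```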
In this section, we present the details of our experimental settings and results. Our experiments are conducted on six benchmark regression problems from the UCI machine learning repository [19]; the specification of these datasets is given in Table 1. We chose the sigmoid function, G(x, ω, b) = 1/(1 + exp(−(ω·x + b))), as the activation function of B-ELM, EB-ELM, EI-ELM, and OEB-ELM. The input weights ω are randomly generated from the range (−1, 1) and the hidden biases b from the range (0, 1) using a uniform sampling distribution. For each regression problem, the average results over 50 trials are reported for each algorithm. All experiments were run in the MATLAB R2014a environment on the same Windows 10 machine with an Intel Core i5 2.3 GHz CPU and 8 GB RAM.
From the above analysis, it can be seen that the proposed OEB-ELM algorithm
achieves models with better generalization performance and stability than the B-ELM
and EB-ELM algorithms. Compared with the EI-ELM algorithm, the OEB-ELM has
higher computational efficiency and can achieve better generalization performance and
stability in most cases.
For Question (2), we gradually increase the number of hidden nodes from 1 to 50
and record the corresponding testing RMSE in the process of adding the hidden nodes.
The performance comparison of each algorithm with the increase of hidden nodes is
shown in Fig. 2.
Figure 2 shows the changes of the testing RMSE of the four algorithms with increasing hidden nodes on the Housing dataset. From Fig. 2, we can observe that the B-ELM,
EB-ELM, and OEB-ELM algorithms achieve high accuracy with a few hidden nodes,
which means that the three algorithms converge faster than the EI-ELM algorithm. We
also observe that the OEB-ELM algorithm exhibits a smaller testing RMSE and less fluctuation than the other algorithms, which means that it achieves models with better generalization performance and stability than the B-ELM, EB-ELM, and EI-ELM algorithms. Similar results can be found in the other cases.

Fig. 2. The testing RMSE updating curves of the EB-ELM, B-ELM, EI-ELM, and OEB-ELM algorithms on the Housing dataset
5 Conclusions
Acknowledgment. This research was supported by the National Natural Science Foundation of
China (61672358).
References
1. Cao, W.P., Wang, X.Z., Ming, Z., Gao, J.Z.: A review on neural networks with random
weights. Neurocomputing 275, 278–287 (2018)
2. Cao, J.W., Lin, Z.P.: Extreme learning machines on high dimensional and large data
applications: a survey. Math. Prob. Eng. 2015, 1–13 (2015)
3. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: a new learning scheme of
feedforward neural networks. In: Proceedings of the 2004 IEEE International Joint Conference
on Neural Networks, vol. 2, pp. 985–990 (2004)
4. Zhang, L., Zhang, D.: Evolutionary cost-sensitive extreme learning machine. IEEE Trans.
Neural Netw. Learn. Syst. 28(12), 3045–3060 (2017)
5. Ding, S.F., Guo, L.L., Hou, Y.L.: Extreme learning machine with kernel model based on
deep learning. Neural Comput. Appl. 28(8), 1975–1984 (2017)
6. Zhang, H.G., Zhang, S., Yin, Y.X.: Online sequential ELM algorithm with forgetting factor
for real applications. Neurocomputing 261, 144–152 (2017)
7. He, Q., Jin, X., Du, C.Y., Zhuang, F.Z., Shi, Z.Z.: Clustering in extreme learning machine
feature space. Neurocomputing 128, 88–95 (2014)
8. Huang, Z., Yu, Y., Gu, J., Liu, H.: An efficient method for traffic sign recognition based on
extreme learning machine. IEEE Trans. Cybern. 47(4), 920–933 (2017)
9. Cao, W.P., Gao, J.Z., Ming, Z., Cai, S.B.: Some tricks in parameter selection for extreme
learning machine. IOP Conf. Ser. Mater. Sci. Eng. 261(1), 012002 (2017)
10. Huang, G.B., Chen, L., Siew, C.K.: Universal approximation using incremental constructive
feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 17(4), 879–892
(2006)
11. Huang, G.B., Chen, L.: Enhanced random search based incremental extreme learning
machine. Neurocomputing 71(16–18), 3460–3468 (2008)
12. Yang, Y.M., Wang, Y.N., Yuan, X.F.: Bidirectional extreme learning machine for regression
problem and its learning effectiveness. IEEE Trans. Neural Netw. Learn. Syst. 23(9),
1498–1505 (2012)
13. Cao, W.P., Ming, Z., Wang, X.Z., Cai, S.B.: Improved bidirectional extreme learning
machine based on enhanced random search. Memetic Comput., 1–8 (2017). https://doi.org/
10.1007/s12293-017-0238-1
14. Rong, H.J., Ong, Y.S., Tan, A.H., Zhu, Z.: A fast pruned-extreme learning machine for
classification problem. Neurocomputing 72, 359–366 (2008)
15. Miche, Y., Sorjamaa, A., Bas, P., Simula, O., Jutten, C., Lendasse, A.: OP-ELM: optimally
pruned extreme learning machine. IEEE Trans. Neural Netw. 21(1), 158–162 (2010)
16. Wang, W.H., Liu, X.Y.: The selection of input weights of extreme learning machine: a
sample structure preserving point of view. Neurocomputing 261, 28–36 (2017)
17. Kasun, L.L.C., Zhou, H., Huang, G.B., Vong, C.M.: Representational learning with extreme
learning machine for big data. IEEE Intell. Syst. 28, 31–34 (2013)
18. Huang, G.B., Bai, Z., Kasun, L.L.C., Vong, C.M.: Local receptive fields based extreme
learning machine. IEEE Comput. Intell. Mag. 10, 18–29 (2015)
19. Blake, C., Merz, C.: UCI repository of machine learning databases. Technical report, Dept.
Inf. Comput. Sci., Univ. California, Irvine, CA, USA (1998). http://archive.ics.uci.edu/ml/
A Novel Feature Specificity Enhancement
for Taste Recognition by Electronic Tongue
1 Introduction
2 Methods
Notations. In this paper, X = [x_1, x_2, ..., x_m]^T ∈ R^{m×d} represents the raw data of a certain sample, where m represents the sample dimension, equal to the number of sensors, and d represents the number of response values per sensor.
where x_i^n represents the i-th sensor response to the n-th sample, Z_ij^n indicates the feature of the n-th sample between the i-th and j-th sensor responses, and κ(·) denotes a kernel function that projects the original specificity component into a nonlinear space. Moreover, we introduce the kernel function to solve the "dimension disaster" (curse of dimensionality) problem in space projection [19], as follows:
κ(x_i, x_j) = exp( −||x_i − x_j||² / (2σ²) )        (2)
where exp(·) represents the exponential function, σ is the width of the kernel function, and ||·||₂ denotes the l2-norm.
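As an illustration, the kernel of Eq. (2) and one plausible form of the pairwise specificity feature can be sketched in Python/NumPy as follows. Since Eq. (1) itself did not survive extraction, applying the kernel directly to each pair of sensor responses is our assumption based on the surrounding definitions, and the function names are hypothetical:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma):
    """Eq. (2): Gaussian (RBF) kernel between two sensor-response vectors."""
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2.0 * sigma ** 2))

def specificity_features(X, sigma):
    """Hypothetical sketch of the pairwise specificity matrix Z for one
    sample X of shape (m, d): m sensors, d response values per sensor.
    Z[i, j] measures the kernel similarity between sensors i and j."""
    m = X.shape[0]
    Z = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            Z[i, j] = gaussian_kernel(X[i], X[j], sigma)
    return Z
```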
In the recognition stage, the extreme learning machine (ELM) module [20] is a favorable choice. It randomly initializes the input weights W = [w_1, w_2, ..., w_L]^T ∈ R^{L×D} and biases b = [b_1, b_2, ..., b_L] ∈ R^L, and then the corresponding output weight matrix β ∈ R^{L×C} can be calculated analytically from the output matrix of the hidden layer. The output matrix H of the hidden layer with L hidden neurons is computed as:
H = [ g(w_1^T x_1 + b_1)  ...  g(w_L^T x_1 + b_L) ]
    [         ...         ...          ...        ]
    [ g(w_1^T x_N + b_1)  ...  g(w_L^T x_N + b_L) ]        (3)
where g(·) is the activation function. If regularization is applied, the ELM learning model can be expressed as follows:

min_β  (1/2)||β||² + (μ/2) Σ_{i=1}^{N} ||ξ_i||²
s.t.   h(x_i)β = t_i − ξ_i,  i = 1, 2, ..., N        (4)
where T = [t_1, t_2, ..., t_N]^T ∈ R^{N×C} denotes the label matrix of the training set and N is the number of training samples. Therefore, the output of ELM can be computed as:
f(x) = h(x) H^T (H H^T + I_{N×N}/μ)⁻¹ T        (6)
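A minimal sketch of this closed-form solution in Python/NumPy (assuming the H H^T reading of Eq. (6), which matches the N×N identity matrix; the names are illustrative):

```python
import numpy as np

def elm_output(h_x, H, T, mu):
    """Regularized ELM output, Eq. (6): f(x) = h(x) H^T (H H^T + I/mu)^-1 T.
    H: (N, L) hidden-layer matrix, T: (N, C) label matrix, h_x: (q, L)
    hidden-layer rows for q query points."""
    N = H.shape[0]
    alpha = np.linalg.solve(H @ H.T + np.eye(N) / mu, T)  # (N, C)
    return h_x @ (H.T @ alpha)
```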
3 Experiments
Support vector machine (SVM), random forest (RF), and KELM are implemented as the recognition part for evaluating the feature extraction methods. In this section, we use the leave-one-out (LOO) strategy for cross validation. The average cross-validation accuracies are reported in Table 1, and the total computation time for cross-validation training and testing is presented in Table 2. From Tables 1 and 2, we can make the following observations:
(1) When SVM is used for classification, the proposed FSE with SVM performs
significantly better than the other feature extraction methods, reaching 90.48%.
In terms of execution time, FSE with SVM also has the lowest time cost among
the feature extraction methods.
(2) When RF is used for classification, both the FSE and raw-feature methods achieve
the highest average accuracy (82.54%) among the feature extraction methods.
In terms of computation time, FSE is nearly 90 times faster than the raw features.
(3) When KELM is adopted, FSE achieves the highest accuracy, 95.24%. Compared with
the raw features (22.22%), PCA (69.84%), and DWT (88.89%), it is obvious that
KELM shows better fitting and reasoning ability with the proposed FSE feature
extraction method. Moreover, the specificity metric with Hilbert projection is
more favorable to KELM than to any other classifier. As for time consumption,
FSE coupled with KELM has the lowest time cost of all methods, indicating that
KELM keeps the amount of computation to a minimum while providing excellent
classification results.
4 Conclusion
In this article, we proposed an FSE method for nonlinear feature extraction from E-Tongue data and achieved taste recognition using several typical classifiers, namely SVM, RF, and KELM. The proposed FSE coupled with KELM achieves the best results in both accuracy and computational efficiency on the data set collected with our self-developed E-Tongue system. FSE appears to be particularly effective for feature extraction from high-dimensional data, especially LAPV signals. On the other hand, KELM can greatly promote the overall recognition performance in both accuracy and speed.
References
1. Legin, A., Rudnitskaya, A., Lvova, L., Di Nataleb, C., D’Amicob, A.: Evaluation of Italian
wine by the electronic tongue: recognition, quantitative analysis and correlation with human
sensory perception. Anal. Chim. Acta 484(1), 33–44 (2003)
2. Ghosh, A., Bag, A.K., Sharma, P., et al.: Monitoring the fermentation process and detection
of optimum fermentation time of black tea using an electronic tongue. IEEE Sen-
sors J. 15(11), 6255–6262 (2015)
3. Verrelli, G., Lvova, L., Paolesse, R., et al.: Metalloporphyrin-based electronic tongue: an
application for the analysis of Italian white wines. Sensors 7(11), 2750–2762 (2007)
4. Tahara, Y., Toko, K.: Electronic tongues–a review. IEEE Sensors J. 13(8), 3001–3011
(2013)
5. Kirsanov, D., Legin, E., Zagrebin, A., et al.: Mimicking Daphnia magna bioassay
performance by an electronic tongue for urban water quality control. Anal. Chim. Acta
824, 64–70 (2014)
6. Wei, Z., Wang, J.: Tracing floral and geographical origins of honeys by potentiometric and
voltammetric electronic tongue. Comput. Electron. Agric. 108, 112–122 (2014)
7. Wang, L., Niu, Q., Hui, Y., Jin, H.: Discrimination of rice with different pretreatment
methods by using a voltammetric electronic tongue. Sensors 15(7), 17767–17785 (2015)
8. Apetrei, I.M., Apetrei, C.: Application of voltammetric e-tongue for the detection of
ammonia and putrescine in beef products. Sens. Actuators B Chem. 234, 371–379 (2016)
9. Ciosek, P., Brzózka, Z., Wróblewski, W.: Classification of beverages using a reduced sensor
array. Sens. Actuators B Chem. 103(1), 76–83 (2004)
10. Domínguez, R.B., Morenobarón, L., Muñoz, R., et al.: Voltammetric electronic tongue and
support vector machines for identification of selected features in Mexican coffee. Sensors
14(9), 17770–17785 (2014)
11. Palit, M., Tudu, B., Bhattacharyya, N., et al.: Comparison of multivariate preprocessing
techniques as applied to electronic tongue based pattern classification for black tea. Anal.
Chim. Acta 675(1), 8–15 (2010)
12. Gutiérrez, M., Llobera, A., Ipatov, A., et al.: Application of an E-tongue to the analysis of
monovarietal and blends of white wines. Sensors 11(5), 4840–4857 (2011)
13. Dias, L.A., et al.: An electronic tongue taste evaluation: identification of goat milk
adulteration with bovine milk. Sens. Actuators B Chem. 136(1), 209–217 (2009)
14. Ciosek, P., Maminska, R., Dybko, A., et al.: Potentiometric electronic tongue based on
integrated array of microelectrodes. Sens. Actuators B Chem. 127(1), 8–14 (2007)
15. Ivarsson, P., et al.: Discrimination of tea by means of a voltammetric electronic tongue and
different applied waveforms. Sens. Actuators B Chem. 76(1), 449–454 (2001)
16. Winquist, F., Wide, P., Lundström, I.: An electronic tongue based on voltammetry. Anal.
Chim. Acta 357(1–2), 21–31 (1997)
17. Tian, S.Y., Deng, S.P., Chen, Z.X.: Multifrequency large amplitude pulse voltammetry:
a novel electrochemical method for electronic tongue. Sens. Actuators B Chem. 123(2),
1049–1056 (2007)
16 Y. Chen et al.
18. Palit, M., Tudu, B., Dutta, P.K., et al.: Classification of black tea taste and correlation with
tea taster’s mark using voltammetric electronic tongue. IEEE Trans. Instrum. Meas. 59(8),
2230–2239 (2010)
19. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
20. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications.
Neurocomputing 70(1), 489–501 (2006)
21. Huang, G.B., Zhou, H., Ding, X., et al.: Extreme learning machine for regression and
multiclass classification. IEEE Trans. Syst. Man Cybern. Part B 42(2), 513–529 (2012)
22. Liu, T., Chen, Y., Li, D., et al.: An active feature selection strategy for DWT in artificial
taste. J. Sens. 2018, 1–11 (2018)