Complementary feature splits for co-training

A Salaheldin, N El-Gayar - 2012 11th International Conference on Information Science, Signal …, 2012 - ieeexplore.ieee.org
In many data mining and machine learning applications, data may be easy to collect; labeling it, however, is often expensive, time-consuming, or difficult. Such applications give rise to semi-supervised learning techniques that combine labeled and unlabeled data. Co-training is a popular semi-supervised learning algorithm that depends on splitting the features of a data set into two redundant and independent views. In many cases, however, such feature sets are not naturally present in the data or are unknown. In this paper we test feature splitting methods that maximize the confidence and the diversity of the views using genetic algorithms, and compare their performance against random splits. We also propose a new criterion that maximizes the complementary nature of the views. Experimental results on six data sets show that our optimized splits improve the performance of co-training over random splits, and that the complementary split outperforms the confidence, diversity, and random splits.
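For context, below is a minimal sketch of the co-training loop the paper builds on, using a plain random feature split as the baseline the authors compare against. It is a simplified, shared-pool variant under stated assumptions, not the authors' implementation: the function name, the GaussianNB base learners, the per-round selection rule, and all parameters are illustrative choices.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_lab, y_lab, X_unlab, n_rounds=10, per_round=5, seed=0):
    """Simplified co-training with a random feature split into two views."""
    rng = np.random.default_rng(seed)
    # Random split of the feature indices into two views (the paper's baseline).
    perm = rng.permutation(X_lab.shape[1])
    v1, v2 = perm[: len(perm) // 2], perm[len(perm) // 2 :]
    clf1, clf2 = GaussianNB(), GaussianNB()
    X1, X2, y = X_lab[:, v1], X_lab[:, v2], np.asarray(y_lab)
    U1, U2 = X_unlab[:, v1], X_unlab[:, v2]
    clf1.fit(X1, y)
    clf2.fit(X2, y)
    for _ in range(n_rounds):
        if len(U1) == 0:
            break
        # Each view scores the unlabeled pool and picks its most confident points.
        conf1 = clf1.predict_proba(U1).max(axis=1)
        conf2 = clf2.predict_proba(U2).max(axis=1)
        pick = np.unique(np.concatenate([np.argsort(conf1)[-per_round:],
                                         np.argsort(conf2)[-per_round:]]))
        # Pseudo-label each picked point with whichever view is more confident,
        # then move it from the unlabeled pool into the shared training set.
        pseudo = np.where(conf1[pick] >= conf2[pick],
                          clf1.predict(U1[pick]), clf2.predict(U2[pick]))
        X1, X2 = np.vstack([X1, U1[pick]]), np.vstack([X2, U2[pick]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(U1)), pick)
        U1, U2 = U1[keep], U2[keep]
        clf1.fit(X1, y)
        clf2.fit(X2, y)
    return clf1, clf2, (v1, v2)
```

In the paper's setting, the random `rng.permutation` split would instead be the output of a genetic-algorithm search over candidate splits, scored by the confidence, diversity, or proposed complementarity criterion.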