Abstract—Feature subset selection is a technique for reducing the attribute space of a feature set; that is, it identifies a subset of features by removing irrelevant or redundant ones. A good feature set containing features highly correlated with the class improves not only the efficiency of classification algorithms but also classification accuracy. A metric that integrates the correlation and reliability information between each feature and each class, obtained from multiple correspondence analysis (MCA), is currently a popular approach to scoring features for feature selection. However, it has the disadvantage that its p-value, which assesses reliability, is based on a conventional confidence interval. In this paper, modified multiple correspondence analysis (M-MCA) is used to improve the reliability. The efficiency and effectiveness of the proposed method are demonstrated through extensive comparisons with MCA on five benchmark datasets from the WEKA and UCI repositories. Naïve Bayes, decision tree, and JRip are used as the classifiers. The classification results, in terms of classification accuracy and size of the feature subspace, show that the proposed Modified-MCA outperforms three other feature selection methods: MCA, information gain, and Relief.
Index Terms—Feature selection, correlation, reliability, p-value, confidence interval.
Authors are with the University of Computer Studies, Yangon, Myanmar (e-mail: myokhaing.ucsy@gmail.com, moonkhamucsy@gmail.com).
Cite: Myo Khaing and Nang Saing Moon Kham, "Modified-MCA Based Feature Selection Model for Preprocessing Step of Classification," International Journal of Information and Education Technology, vol. 1, no. 5, pp. 392-397, 2011.