An Extensive Evaluation of Filtering Misclassified Instances in Supervised Classification Tasks

Bibliographic information
Publication: arXiv.org (Dec 13, 2013), p. n/a
Main author: Smith, Michael R
Other authors: Martinez, Tony
Published:
Cornell University Library, arXiv.org
Links: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2085807521
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2085807521 
045 0 |b d20131213 
100 1 |a Smith, Michael R 
245 1 |a An Extensive Evaluation of Filtering Misclassified Instances in Supervised Classification Tasks 
260 |b Cornell University Library, arXiv.org  |c Dec 13, 2013 
513 |a Working Paper 
520 3 |a Removing or filtering outliers and mislabeled instances prior to training a learning algorithm has been shown to increase classification accuracy. A popular approach for handling outliers and mislabeled instances is to remove any instance that is misclassified by a learning algorithm. However, an examination of which learning algorithms to use for filtering, and of their effects on multiple learning algorithms over a large set of data sets, has not been done. Previous work has generally been limited by the large computational requirements of running such an experiment and has therefore been restricted to computationally inexpensive learning algorithms and small numbers of data sets. In this paper, we examine 9 learning algorithms as filtering algorithms and examine the effects of filtering on the same 9 learning algorithms over a set of 54 data sets. In addition to using each learning algorithm individually as a filter, we use the set of learning algorithms as an ensemble filter, and we use an adaptive algorithm that selects a subset of the learning algorithms for filtering for a specific task and learning algorithm. We find that in most cases, using an ensemble of learning algorithms for filtering produces the greatest increase in classification accuracy. We also compare filtering with a majority voting ensemble. The voting ensemble significantly outperforms filtering unless high amounts of noise are present in the data set. Additionally, we find that a majority voting ensemble is robust to noise: filtering with a voting ensemble does not increase the classification accuracy of the voting ensemble. 
653 |a Accuracy 
653 |a Datasets 
653 |a Algorithms 
653 |a Classification 
653 |a Outliers (statistics) 
653 |a Machine learning 
653 |a Adaptive filters 
653 |a Adaptive algorithms 
700 1 |a Martinez, Tony 
773 0 |t arXiv.org  |g (Dec 13, 2013), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2085807521/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/1312.3970
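The filtering scheme the abstract describes (remove any training instance that a majority of an ensemble of learning algorithms misclassifies when it is held out, then train on the cleaned set) can be sketched as follows. This is a hypothetical illustration, not the authors' code: it stands in for the 9 learning algorithms with tiny k-NN learners at different values of k, and uses simple modulo-based cross-validation folds.

```python
# Minimal sketch of ensemble misclassification filtering (hypothetical
# illustration; the paper's experiments use 9 full learning algorithms,
# here replaced by k-NN learners with k = 1, 3, 5).

from collections import Counter

def knn_predict(train, query, k=1):
    """Tiny k-NN classifier; train is a list of (features, label) pairs."""
    nearest = sorted(
        train,
        key=lambda xy: sum((a - b) ** 2 for a, b in zip(xy[0], query)),
    )
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

def ensemble_filter(data, ks=(1, 3, 5), n_folds=5):
    """Remove instances that a majority of the ensemble misclassifies.

    Each instance is predicted by every base learner while held out in
    cross-validation; it is kept only if at most half of the learners
    misclassify it.
    """
    kept = []
    for fold in range(n_folds):
        test = [xy for i, xy in enumerate(data) if i % n_folds == fold]
        train = [xy for i, xy in enumerate(data) if i % n_folds != fold]
        for x, y in test:
            errors = sum(knn_predict(train, x, k) != y for k in ks)
            if errors <= len(ks) // 2:  # majority classified it correctly
                kept.append((x, y))
    return kept
```

On a toy data set of two well-separated clusters with one mislabeled point, the filter drops the mislabeled instance while keeping the rest; a final model would then be trained on the returned, cleaned data.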