Many theoretical and experimental studies have shown that a multiple classifier system is an effective technique for reducing prediction errors [9,10,11,20,19]. These studies identify mainly three elements that characterize a set of classifiers:

- The representation of the input (what each individual classifier receives by way of input).
- The architecture of the individual classifiers (algorithms and parametrization).
- The way these classifiers are made to reach a decision together.

It can be assumed that a combination method is efficient if each individual classifier makes errors 'in a different way', so that most of the classifiers can be expected to correct the mistakes that any individual one makes [1,19]. The term 'weak classifiers' refers to classifiers whose capacity has been reduced in some way so as to increase their prediction diversity. Either their internal architecture is simple (e.g., they use mono-layer perceptrons instead of more sophisticated neural networks), or they are prevented from using all the information available. Since each classifier sees different sections of the learning set, the error correlation among them is reduced. It has been shown that the majority vote is the best strategy if the errors among the classifiers are not correlated. Moreover, in real applications, the majority vote also appears to be as efficient as more sophisticated decision rules [2,13]. One method of generating a diverse set of classifiers is to upset some aspect of the training input to which the classifier is rather unstable. In the present paper, we study two distinct ways to create such weakened classifiers, i.e. learning set resampling (using the 'Bagging' approach [5]) and random feature subset selection (using 'MFS', a Multiple Feature Subsets approach [3]). Other recent and similar techniques are not discussed here but are also based on modifications to the training and/or the feature set [7,8,12,21].
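The combination scheme described above can be sketched in a few lines of Python: each 'weak' classifier (here a hypothetical one-threshold decision stump on a toy 1-D dataset) is trained on a bootstrap resample of the learning set, as in Bagging, and the ensemble decides by majority vote. This is an illustrative sketch under invented names and data, not code from the paper.

```python
import random
from collections import Counter

# Toy 1-D data: class 0 clusters near 0, class 1 near 10.
random.seed(0)
data = [(random.gauss(0, 2), 0) for _ in range(50)] + \
       [(random.gauss(10, 2), 1) for _ in range(50)]

def train_stump(sample):
    """Fit a 'weak' one-threshold classifier: predict 1 if x > t."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in sample}):
        acc = sum((x > t) == bool(y) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_ensemble(data, n_classifiers=15):
    """Bagging: each stump sees a bootstrap resample of the learning set."""
    return [train_stump(random.choices(data, k=len(data)))
            for _ in range(n_classifiers)]

def majority_vote(ensemble, x):
    """Combine the individual decisions by a simple majority vote."""
    votes = Counter(int(x > t) for t in ensemble)
    return votes.most_common(1)[0][0]

ensemble = bagged_ensemble(data)
print(majority_vote(ensemble, -1.0))  # → 0
print(majority_vote(ensemble, 11.0))  # → 1
```

Under MFS, each classifier would instead be trained on a random subset of the features rather than a resample of the examples; the majority-vote combination stays the same.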
"Sinopsis" puede pertenecer a otra edición de este libro.
This book constitutes the refereed proceedings of the First International Workshop on Multiple Classifier Systems, MCS 2000, held in Cagliari, Italy in June 2000.
The 33 revised full papers presented together with five invited papers were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections on theoretical issues, multiple classifier fusion, bagging and boosting, design of multiple classifier systems, applications of multiple classifier systems, document analysis, and miscellaneous applications.
"Sobre este título" puede pertenecer a otra edición de este libro.
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Item reference no.: ria9783540677048_new
Quantity available: More than 20 available
Bookseller: Chiron Media, Wallingford, United Kingdom
Paperback. Condition: New. Item reference no.: 6666-IUK-9783540677048
Quantity available: More than 20 available
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item reference no.: 919229-n
Quantity available: More than 20 available
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer. New stock. 424 pp. English. Item reference no.: 9783540677048
Quantity available: 2 available
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item reference no.: 919229-n
Quantity available: More than 20 available
Bookseller: Books Puddle, New York, NY, United States of America
Condition: New. pp. 424. Item reference no.: 263078260
Quantity available: 4 available
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on Demand. pp. 424, illus. Item reference no.: 5851051
Quantity available: 4 available
Bookseller: Biblios, Frankfurt am Main, HESSE, Germany
Condition: New. Print on Demand. pp. 424. Item reference no.: 183078270
Quantity available: 4 available
Bookseller: moluna, Greven, Germany
Condition: New. Item reference no.: 4898350
Quantity available: More than 20 available
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. This item is printed on demand - Print on Demand title. New stock. Contents:

- Ensemble Methods in Machine Learning
- Experiments with Classifier Combining Rules
- The 'Test and Select' Approach to Ensemble Combination
- A Survey of Sequential Combination of Word Recognizers in Handwritten Phrase Recognition at CEDAR
- Multiple Classifier Combination Methodologies for Different Output Levels
- A Mathematically Rigorous Foundation for Supervised Learning
- Classifier Combinations: Implementations and Theoretical Issues
- Some Results on Weakly Accurate Base Learners for Boosting Regression and Classification
- Complexity of Classification Problems and Comparative Advantages of Combined Classifiers
- Effectiveness of Error Correcting Output Codes in Multiclass Learning Problems
- Combining Fisher Linear Discriminants for Dissimilarity Representations
- A Learning Method of Feature Selection for Rough Classification
- Analysis of a Fusion Method for Combining Marginal Classifiers
- A Hybrid Projection Based and Radial Basis Function Architecture
- Combining Multiple Classifiers in Probabilistic Neural Networks
- Supervised Classifier Combination through Generalized Additive Multi-model
- Dynamic Classifier Selection
- Boosting in Linear Discriminant Analysis
- Different Ways of Weakening Decision Trees and Their Impact on Classification Accuracy of DT Combination
- Applying Boosting to Similarity Literals for Time Series Classification
- Boosting of Tree-Based Classifiers for Predictive Risk Modeling in GIS
- A New Evaluation Method for Expert Combination in Multi-expert System Designing
- Diversity between Neural Networks and Decision Trees for Building Multiple Classifier Systems
- Self-Organizing Decomposition of Functions
- Classifier Instability and Partitioning
- A Hierarchical Multiclassifier System for Hyperspectral Data Analysis
- Consensus Based Classification of Multisource Remote Sensing Data
- Combining Parametric and Nonparametric Classifiers for an Unsupervised Updating of Land-Cover Maps
- A Multiple Self-Organizing Map Scheme for Remote Sensing Classification
- Use of Lexicon Density in Evaluating Word Recognizers
- A Multi-expert System for Dynamic Signature Verification
- A Cascaded Multiple Expert System for Verification
- Architecture for Classifier Combination Using Entropy Measures
- Combining Fingerprint Classifiers
- Statistical Sensor Calibration for Fusion of Different Classifiers in a Biometric Person Recognition Framework
- A Modular Neuro-Fuzzy Network for Musical Instruments Classification
- Classifier Combination for Grammar-Guided Sentence Recognition
- Shape Matching and Extraction by an Array of Figure-and-Ground Classifiers

Springer-Verlag KG, Sachsenplatz 4-6, 1201 Wien. 424 pp. English. Item reference no.: 9783540677048
Quantity available: 1 available