Many theoretical and experimental studies have shown that a multiple classifier system is an effective technique for reducing prediction errors [9,10,11,20,19]. These studies identify mainly three elements that characterize a set of classifiers:
- The representation of the input (what each individual classifier receives by way of input).
- The architecture of the individual classifiers (algorithms and parametrization).
- The way to cause these classifiers to take a decision together.
It can be assumed that a combination method is efficient if each individual classifier makes errors 'in a different way', so that it can be expected that most of the classifiers can correct the mistakes that an individual one makes [1,19]. The term 'weak classifiers' refers to classifiers whose capacity has been reduced in some way so as to increase their prediction diversity: either their internal architecture is simple (e.g., they use mono-layer perceptrons instead of more sophisticated neural networks), or they are prevented from using all the information available. Since each classifier sees different sections of the learning set, the error correlation among them is reduced. It has been shown that the majority vote is the best strategy if the errors among the classifiers are not correlated. Moreover, in real applications, the majority vote also appears to be as efficient as more sophisticated decision rules [2,13]. One method of generating a diverse set of classifiers is to upset some aspect of the training input to which the classifier is rather unstable. In the present paper, we study two distinct ways to create such weakened classifiers, i.e. learning set resampling (using the 'Bagging' approach [5]) and random feature subset selection (using 'MFS', a Multiple Feature Subsets approach [3]). Other recent and similar techniques are not discussed here but are also based on modifications to the training and/or the feature set [7,8,12,21].
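The two weakening strategies described above, Bagging-style resampling of the learning set and MFS-style random feature subsets, combined by simple majority vote, can be sketched as follows. This is a minimal illustration using a hypothetical nearest-centroid base learner and toy data, not the experimental setup of the paper:

```python
import random
from collections import Counter

def centroid_classifier(X, y):
    """Hypothetical weak base learner: predict the nearest class centroid."""
    centroids = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    def predict(x):
        return min(centroids,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
    return predict

def train_ensemble(X, y, n_members=11, n_feats=1, seed=0):
    """Each member sees a bootstrap resample of the learning set (Bagging)
    restricted to a random feature subset (MFS), reducing error correlation."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    members = []
    for _ in range(n_members):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        feats = rng.sample(range(d), n_feats)        # random feature subset
        Xs = [[X[i][f] for f in feats] for i in idx]
        members.append((feats, centroid_classifier(Xs, [y[i] for i in idx])))
    return members

def majority_vote(members, x):
    """Combine the weak members' predictions by simple majority vote."""
    votes = Counter(clf([x[f] for f in feats]) for feats, clf in members)
    return votes.most_common(1)[0][0]

# Toy usage: two well-separated classes in two dimensions.
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
ensemble = train_ensemble(X, y)
```

Each member is deliberately weak (it sees only a bootstrap sample and a single random feature), yet the majority vote over the ensemble recovers the correct decision, which is the effect the paper studies.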
This book constitutes the refereed proceedings of the First International Workshop on Multiple Classifier Systems, MCS 2000, held in Cagliari, Italy in June 2000.
The 33 revised full papers presented together with five invited papers were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections on theoretical issues, multiple classifier fusion, bagging and boosting, design of multiple classifier systems, applications of multiple classifier systems, document analysis, and miscellaneous applications.
Bookseller: GuthrieBooks, Spring Branch, TX, United States of America
Paperback. Condition: Very Good. Ex-library paperback in very nice condition with the usual markings and attachments. Item ref. no.: DA1411791
Quantity available: 1
Bookseller: Books Puddle, New York, NY, United States of America
Condition: New. pp. 424. Item ref. no.: 263078260
Quantity available: 1
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. pp. 424, illus. Item ref. no.: 5851051
Quantity available: 1
Bookseller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. pp. 424. Item ref. no.: 183078270
Quantity available: 1
Bookseller: Lucky's Textbooks, Dallas, TX, United States of America
Condition: New. Item ref. no.: ABLIING23Mar3113020174725
Quantity available: More than 20
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref. no.: 919229-n
Quantity available: More than 20
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item ref. no.: ria9783540677048_new
Quantity available: More than 20
Bookseller: Chiron Media, Wallingford, United Kingdom
Paperback. Condition: New. Item ref. no.: 6666-IUK-9783540677048
Quantity available: More than 20
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 424 pp. English. Item ref. no.: 9783540677048
Quantity available: 2
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item ref. no.: 919229-n
Quantity available: More than 20