Language: English
Published by LAP LAMBERT Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: preigu, Osnabrück, Germany
EUR 51.00
Quantity available: 5
Paperback. Condition: New. Efficient Kernel Methods For Large Scale Classification | Scalable methods for training Support Vector Machines | Asharaf S | Paperback | 132 pp. | English | 2011 | LAP LAMBERT Academic Publishing | EAN 9783846541463 | Person responsible for the EU: BoD - Books on Demand, In de Tarpen 42, 22848 Norderstedt, info[at]bod[dot]de | Seller: preigu.
Language: English
Published by Lap Lambert Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: Revaluation Books, Exeter, United Kingdom
EUR 121.04
Quantity available: 1
Paperback. Condition: Brand New. 132 pages. 8.50 x 0.39 x 5.91 inches. In stock.
Language: English
Published by LAP LAMBERT Academic Publishing, Oct 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 59.00
Quantity available: 2
Paperback. Condition: New. This item is printed on demand; allow 3-4 extra days. New stock. Classification algorithms are widely used in many application domains. Most of these domains deal with massive collections of data and hence demand classification algorithms that scale well with the size of the data sets involved. A classification algorithm is said to be scalable if its time and space requirements do not increase significantly (without compromising generalization performance) as the training set grows. The Support Vector Machine (SVM) is one of the most celebrated kernel-based classification methods in machine learning. An SVM capable of handling large-scale classification problems would be an ideal candidate for many real-world applications. The training process of an SVM classifier is usually formulated as a Quadratic Programming (QP) problem. Existing solution strategies for this problem have a time and space complexity that is (at least) quadratic in the number of training points, which makes SVM training very expensive. This thesis addresses the scalability of SVM training algorithms to make them feasible for large training data sets. 132 pp. English.
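The quadratic cost mentioned in the description above can be illustrated with a short sketch (a hypothetical illustration, not code from the book): standard QP formulations of the SVM dual operate on the full n × n kernel (Gram) matrix, so simply materializing that matrix already requires time and memory quadratic in the number of training points n.

```python
import numpy as np

def rbf_gram_matrix(X, gamma=1.0):
    """Compute the n x n RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    Standard QP solvers for the SVM dual need this full matrix, so both
    time and memory grow quadratically with the number of training points n.
    """
    sq_norms = np.sum(X ** 2, axis=1)                              # shape (n,)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    # Clamp tiny negative values caused by floating-point round-off.
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# 500 points in 10 dimensions already give a 500 x 500 matrix; at n = 10^6
# the same matrix would need roughly 8 TB of float64 storage.
X = np.random.default_rng(0).normal(size=(500, 10))
K = rbf_gram_matrix(X)
```

The quadratic blow-up is visible directly: doubling n quadruples the number of kernel entries, which is exactly the scaling bottleneck the thesis targets.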
Language: English
Published by LAP LAMBERT Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: moluna, Greven, Germany
EUR 48.50
Quantity available: More than 20
Condition: New. This is a print-on-demand item and will be printed for you after your order. Author: Asharaf S. Asharaf S is a faculty member in the IT & Systems area at IIM Kozhikode, India. He received his PhD and Master of Engineering degrees from the Indian Institute of Science, Bangalore. Prior to joining IIMK, he was with Americ.
Language: English
Published by LAP LAMBERT Academic Publishing, Oct 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
EUR 59.00
Quantity available: 1
Paperback. Condition: New. This item is printed on demand; print-on-demand title. New stock. VDM Verlag, Dudweiler Landstraße 99, 66123 Saarbrücken. 132 pp. English.
Language: English
Published by LAP LAMBERT Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
EUR 59.00
Quantity available: 1
Paperback. Condition: New. Printed after ordering. New stock.