Seller: Goodwill of Silicon Valley, San Jose, CA, United States
EUR 31,89
Quantity available: 1
Condition: Good. Supports Goodwill of Silicon Valley job training programs. The cover and pages are in good condition. Any included accessories are also in good condition, showing use. Use can include some highlighting and writing, page and cover creases, and other visible wear.
Seller: GreatBookPrices, Columbia, MD, United States
EUR 43,29
Quantity available: More than 20
Condition: New.
Seller: Lucky's Textbooks, Dallas, TX, United States
EUR 45,13
Quantity available: More than 20
Condition: New.
Seller: GreatBookPrices, Columbia, MD, United States
EUR 50,85
Quantity available: More than 20
Condition: As New. Unread book in perfect condition.
Seller: BargainBookStores, Grand Rapids, MI, United States
EUR 67,10
Quantity available: 5
Paperback or softback. Condition: New. Modern Data Mining Algorithms in C++ and CUDA C: Recent Developments in Feature Extraction and Selection Algorithms for Data Science.
Seller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
Original or first edition
EUR 60,40
Quantity available: 15
Condition: New. 2020. 1st ed. Paperback.
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 57,05
Quantity available: More than 20
Condition: As New. Unread book in perfect condition.
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 63,07
Quantity available: More than 20
Condition: New. In English.
Seller: Chiron Media, Wallingford, United Kingdom
EUR 59,52
Quantity available: 10
PF. Condition: New.
Seller: Revaluation Books, Exeter, United Kingdom
EUR 67,20
Quantity available: 2
Paperback. Condition: Brand New. 237 pages. 10.00x7.00x0.50 inches. In stock.
Seller: Lucky's Textbooks, Dallas, TX, United States
EUR 75,68
Quantity available: More than 20
Condition: New.
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 62,42
Quantity available: More than 20
Condition: New.
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 68,97
Quantity available: More than 20
Condition: New. In English.
Seller: Kennys Bookstore, Olney, MD, United States
EUR 74,36
Quantity available: 15
Condition: New. 2020. 1st ed. Paperback. Books ship from the US and Ireland.
Seller: Chiron Media, Wallingford, United Kingdom
EUR 67,96
Quantity available: 10
Paperback. Condition: New.
Seller: Books Puddle, New York, NY, United States
EUR 93,72
Quantity available: 4
Condition: New.
Seller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
Original or first edition
EUR 97,53
Quantity available: 15
Condition: New. 2017. 1st ed. Paperback.
Seller: Revaluation Books, Exeter, United Kingdom
EUR 113,02
Quantity available: 2
Paperback. Condition: Brand New. 286 pages. 10.00x7.00x1.00 inches. In stock.
Seller: Kennys Bookstore, Olney, MD, United States
EUR 120,82
Quantity available: 15
Condition: New. 2017. 1st ed. Paperback. Books ship from the US and Ireland.
Seller: preigu, Osnabrück, Germany
EUR 58,60
Quantity available: 5
Paperback. Condition: New. Modern Data Mining Algorithms in C++ and CUDA C | Recent Developments in Feature Extraction and Selection Algorithms for Data Science | Timothy Masters | Paperback | ix | English | 2020 | Apress | EAN 9781484259870 | Responsible person for the EU: APress in Springer Science + Business Media, Heidelberger Platz 3, 14197 Berlin, juergen[dot]hartmann[at]springer[dot]com | Supplier: preigu.
Published by Apress, Jun 2020
ISBN 10: 1484259874 ISBN 13: 9781484259870
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
EUR 69,54
Quantity available: 2
Paperback. Condition: New. New stock. Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or extracting useful features from measured variables. As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, with most of the features being of little or no value. You'll know that many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve this problem by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:
- Forward selection component analysis
- Local feature selection
- Linking features and a target with a hidden Markov model
- Improvements on traditional stepwise selection
- Nominal-to-ordinal conversion
All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code. The example code is in C++ and CUDA C, but Python or other code can be substituted; the algorithm is important, not the code that's used to write it.
What You Will Learn:
- Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
- Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
- Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
- Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, or the improvement obtained by adding a new variable, could be just the product of random good luck.
- Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.
Who This Book Is For: Intermediate to advanced data science programmers and analysts.
APress in Springer Science + Business Media, Heidelberger Platz 3, 14197 Berlin. 240 pp. English.
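The nominal-to-ordinal conversion described above (assigning each category a sensible numeric value usable as a model input) can be sketched as a smoothed target encoding. This is an illustration of the general idea only, not the book's specific algorithm; the function name and the smoothing scheme are assumptions:

```python
from collections import defaultdict

def nominal_to_ordinal(categories, targets, smoothing=10.0):
    """Map each category label to a smoothed mean of the target.

    Rarely seen categories are pulled toward the global target mean,
    which limits the noise a small category can inject into a model.
    """
    global_mean = sum(targets) / len(targets)
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, t in zip(categories, targets):
        sums[c] += t
        counts[c] += 1
    encoding = {}
    for c in counts:
        n = counts[c]
        encoding[c] = (sums[c] + smoothing * global_mean) / (n + smoothing)
    return encoding

cats = ["red", "red", "blue", "blue", "blue", "green"]
ys   = [1.0,   0.0,   1.0,    1.0,    1.0,    0.0]
enc = nominal_to_ordinal(cats, ys, smoothing=0.0)
# With smoothing=0 this is the plain per-category mean:
# red -> 0.5, blue -> 1.0, green -> 0.0
```

With a positive smoothing value, a category seen only once is scored close to the global mean rather than taking its single target value at face value.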
Seller: preigu, Osnabrück, Germany
EUR 71,30
Quantity available: 5
Paperback. Condition: New. Data Mining Algorithms in C++ | Data Patterns and Algorithms for Modern Applications | Timothy Masters | Paperback | xiv | English | 2017 | Apress | EAN 9781484233146 | Responsible person for the EU: APress in Springer Science + Business Media, Heidelberger Platz 3, 14197 Berlin, juergen[dot]hartmann[at]springer[dot]com | Supplier: preigu.
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 69,54
Quantity available: 2
Paperback. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. The publisher's description is the same as in the buchversandmimpf2000 listing above. 240 pp. English.
Seller: THE SAINT BOOKSTORE, Southport, United Kingdom
EUR 82,41
Quantity available: More than 20
Paperback / softback. Condition: New. This item is printed on demand. New copy; usually dispatched within 5-9 working days.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 98,29
Quantity available: 4
Condition: New. Print on demand.
Seller: moluna, Greven, Germany
EUR 56,35
Quantity available: More than 20
Condition: New. This is a print-on-demand item and will be printed for you after ordering. A novel expert-driven data-mining approach to algorithms in C++ and CUDA C. The author has been developing and using algorithms for over 20 years. Data mining is an important topic in big data and data science.
Seller: Biblios, Frankfurt am Main, Hesse, Germany
EUR 99,73
Quantity available: 4
Condition: New. Print on demand.
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 70,84
Quantity available: 1
Paperback. Condition: New. Printed after ordering; new stock. The publisher's description is the same as in the buchversandmimpf2000 listing above.
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 85,05
Quantity available: 1
Paperback. Condition: New. Printed after ordering; new stock. Discover hidden relationships among the variables in your data, and learn how to exploit these relationships. This book presents a collection of data-mining algorithms that are effective in a wide variety of prediction and classification applications. All algorithms include an intuitive explanation of operation, essential equations, references to more rigorous theory, and commented C++ source code. Many of these techniques are recent developments, still not in widespread use. Others are standard algorithms given a fresh look. In every case, the focus is on practical applicability, with all code written in such a way that it can easily be included in any program. The Windows-based DATAMINE program lets you experiment with the techniques before incorporating them into your own work.
What You'll Learn:
- Use Monte-Carlo permutation tests to provide statistically sound assessments of relationships present in your data.
- Discover how combinatorially symmetric cross validation reveals whether your model has true power or has just learned noise by overfitting the data.
- Work with feature weighting as regularized energy-based learning to rank variables according to their predictive power when there is too little data for traditional methods.
- See how the eigenstructure of a dataset enables clustering of variables into groups that exist only within meaningful subspaces of the data.
- Plot regions of the variable space where there is disagreement between marginal and actual densities, or where the contribution to mutual information is high.
Who This Book Is For: Anyone interested in discovering and exploiting relationships among variables. Although all code examples are written in C++, the algorithms are described in sufficient detail that they can easily be programmed in any language.
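The Monte-Carlo permutation test mentioned above can be sketched as follows. This is a generic illustration of the idea with an assumed association statistic, not the book's or the DATAMINE program's implementation:

```python
import random

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Estimate how often a random pairing of x and y yields a
    relationship at least as strong as the observed one.

    The statistic here is the absolute dot product of the centered
    variables; any association measure could be substituted.
    """
    rng = random.Random(seed)
    mx = sum(x) / len(x)
    my = sum(y) / len(y)

    def stat(a, b):
        return abs(sum((ai - mx) * (bi - my) for ai, bi in zip(a, b)))

    observed = stat(x, y)
    y_shuffled = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_shuffled)  # break any real x-y link
        if stat(x, y_shuffled) >= observed:
            hits += 1
    # The +1 correction keeps the estimate away from an impossible p of 0.
    return (hits + 1) / (n_perm + 1)

x = list(range(20))
y = [2.0 * v + 0.5 for v in x]  # strongly related pair
p = permutation_p_value(x, y)   # p should be very small here
```

The appeal of the method is that it makes no distributional assumptions: the null distribution of the statistic is generated directly by shuffling, so the resulting p-value is statistically sound for any choice of association measure.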