Bookseller: AwesomeBooks, Wallingford, United Kingdom
EUR 10,11
Quantity available: 1
Paperback. Condition: Very Good. Neural Networks for Conditional Probability Estimation: Forecasting Beyond Point Predictions (Perspectives in Neural Computing). This book is in very good condition and will be shipped within 24 hours of ordering. The cover may have some limited signs of wear, but the pages are clean and intact and the spine remains undamaged. This book has clearly been well maintained and looked after thus far. Money-back guarantee if you are not satisfied. See all our books here; order more than one book and get discounted shipping.
Bookseller: Phatpocket Limited, Waltham Abbey, HERTS, United Kingdom
EUR 4,48
Quantity available: 1
Condition: Good. Your purchase helps support the Sri Lankan children's charity 'The Rainbow Centre'. Ex-library, so some stamps and wear, but in good overall condition. Our donations to The Rainbow Centre have helped provide an education and a safe haven to hundreds of children who live in appalling conditions.
Bookseller: Bahamut Media, Reading, United Kingdom
EUR 10,11
Quantity available: 1
Paperback. Condition: Very Good. Shipped within 24 hours from our UK warehouse. Clean, undamaged book with no damage to pages and minimal wear to the cover. Spine still tight, in very good condition. Remember, if you are not happy, you are covered by our 100% money-back guarantee.
Bookseller: Solr Books, Lincolnwood, IL, United States of America
EUR 17,52
Quantity available: 1
Condition: Very Good. This book is in very good condition. There may be a few flaws, such as shelf wear and some light wear.
Bookseller: California Books, Miami, FL, United States of America
EUR 64,89
Quantity available: more than 20
Condition: New.
Language: English
Published by Springer (London; Berlin; Tokyo; Heidelberg; New York; Barcelona; Hong Kong; Milan; Paris; Santa Clara; Singapore), 1999
ISBN 10: 1852330953 ISBN 13: 9781852330958
Bookseller: Roland Antiquariat UG (haftungsbeschränkt), Weinheim, Germany
EUR 56,00
Quantity available: 1
Softcover. XXIII, 275 pp.: graphs; 24 cm. Like new, unread book. 9781852330958. Language: German. Weight in grams: 467. Softcover reprint of the original 1st ed. 1999.
Bookseller: Chiron Media, Wallingford, United Kingdom
EUR 55,99
Quantity available: 10
Paperback. Condition: New.
Bookseller: Books Puddle, New York, NY, United States of America
EUR 74,16
Quantity available: 4
Condition: New. pp. 302.
Bookseller: Revaluation Books, Exeter, United Kingdom
EUR 77,87
Quantity available: 2
Paperback. Condition: Brand New. 275 pages. 9.50 x 6.25 x 0.75 inches. In stock.
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
EUR 59,97
Quantity available: 1
Paperback. Condition: New. Print on demand, new stock; printed after ordering. Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
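As a rough illustration of the idea in the description above (squared-error training recovers only the conditional mean, whereas a maximum-likelihood scheme can model the whole conditional density), the following Python sketch defines a small network with two hidden layers whose outputs parameterise a conditional Gaussian mixture and evaluates the corresponding negative log-likelihood on toy bimodal data. The architecture, function names, and data are illustrative assumptions, not the book's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(d_in=1, hidden=16, n_components=3):
    """Random weights for a small network with two tanh hidden layers."""
    def layer(m, n):
        return rng.normal(0.0, 1.0 / np.sqrt(m), (m, n)), np.zeros(n)
    W1, b1 = layer(d_in, hidden)
    W2, b2 = layer(hidden, hidden)
    W3, b3 = layer(hidden, 3 * n_components)   # weight, mean, log-std per component
    return [W1, b1, W2, b2, W3, b3]

def mixture_params(params, x):
    """Map inputs x to the parameters of a conditional Gaussian mixture."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)                 # second hidden layer
    out = h2 @ W3 + b3
    logits, mu, log_sigma = np.split(out, 3, axis=1)
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)        # mixture weights via softmax
    return pi, mu, np.exp(log_sigma)           # std devs kept positive via exp

def neg_log_likelihood(params, x, t):
    """Negative log-likelihood of targets t under the conditional mixture."""
    pi, mu, sigma = mixture_params(params, x)
    dens = pi * np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return -np.mean(np.log(dens.sum(axis=1) + 1e-12))

# Toy bimodal data: for the same x the target sits near +sin(x) or -sin(x),
# so the conditional mean (what squared-error training would learn) is ~0
# and describes neither mode.
x = rng.uniform(-3.0, 3.0, size=(512, 1))
sign = np.where(rng.random((512, 1)) < 0.5, 1.0, -1.0)
t = sign * np.sin(x) + 0.05 * rng.normal(size=(512, 1))

params = init_params()
print("NLL at initialisation:", neg_log_likelihood(params, x, t))
# Minimising this NLL (e.g. by gradient descent) fits the full conditional
# density, so both modes can be represented; minimising mean squared error
# would instead collapse the prediction to the uninformative conditional mean.
```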
Language: English
Published by Springer, February 1999
ISBN 10: 1852330953 ISBN 13: 9781852330958
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 53,49
Quantity available: 2
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock; publisher's description as above. 300 pp. English.
Bookseller: Majestic Books, Hounslow, United Kingdom
EUR 76,74
Quantity available: 4
Condition: New. Print on demand. pp. 302. B&W, 6.14 x 9.21 in (234 x 156 mm, Royal 8vo), perfect bound on white paper with gloss lamination.
Bookseller: THE SAINT BOOKSTORE, Southport, United Kingdom
EUR 67,24
Quantity available: more than 20
Paperback / softback. Condition: New. This item is printed on demand. New copy; usually dispatched within 5-9 working days.
Bookseller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 78,77
Quantity available: 4
Condition: New. Print on demand. pp. 302.
Bookseller: moluna, Greven, Germany
EUR 48,37
Quantity available: more than 20
Condition: New. This is a print-on-demand item and will be printed for you after your order. Provides unique, comprehensive coverage of generalisation and regularisation; provides the first real-world test results for recent theoretical findings on the generalisation performance of committees. Conventional applications of neural networks usually ...
Language: English
Published by Springer London, February 1999
ISBN 10: 1852330953 ISBN 13: 9781852330958
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 53,49
Quantity available: 1
Paperback. Condition: New. This item is printed on demand (print-on-demand title). New stock; publisher's description as above. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 300 pp. English.