Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Goodwill Books, Hillsboro, OR, United States of America
EUR 28,89
Quantity available: 1
Condition: Good. Signs of wear and consistent use.
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Lucky's Textbooks, Dallas, TX, United States of America
EUR 51,61
Quantity available: More than 20
Condition: New.
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: California Books, Miami, FL, United States of America
EUR 57,51
Quantity available: More than 20
Condition: New.
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 57,05
Quantity available: More than 20
Condition: New.
Published by Cambridge University Press, GB, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Rarewaves USA, Oswego, IL, United States of America
EUR 76,44
Quantity available: More than 20
Hardback. Condition: New. Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
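As a rough, self-contained illustration of the stochastic bandit setting mentioned in the description above (not code from the book), the sketch below runs the classic UCB1 index policy on a small set of Bernoulli arms; the arm probabilities, horizon, and function names are made up for the example.

```python
import math
import random

# Illustrative only: made-up Bernoulli arms, not an example taken from the book.
ARM_PROBS = [0.3, 0.5, 0.7]   # hypothetical success probability of each arm
HORIZON = 10_000              # number of rounds to play

def pull(arm: int) -> float:
    """Sample a Bernoulli reward from the chosen arm."""
    return 1.0 if random.random() < ARM_PROBS[arm] else 0.0

def ucb1(horizon: int, n_arms: int) -> float:
    """Run the UCB1 index policy and return the total reward collected."""
    counts = [0] * n_arms      # how often each arm has been played
    means = [0.0] * n_arms     # empirical mean reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # play each arm once to initialise its estimate
        else:
            # pick the arm with the largest upper confidence bound
            arm = max(
                range(n_arms),
                key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean update
        total += reward
    return total

if __name__ == "__main__":
    random.seed(0)
    reward = ucb1(HORIZON, len(ARM_PROBS))
    best = max(ARM_PROBS) * HORIZON
    print(f"total reward: {reward:.0f}, regret vs. best arm: {best - reward:.0f}")
```

The idea behind the index is that an arm is pulled either because its empirical mean is high or because it has been tried so rarely that its confidence bonus is still large, which is one simple way to trade off exploration and exploitation.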
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Books Puddle, New York, NY, United States of America
EUR 83,69
Quantity available: 4
Condition: New.
Published by Cambridge University Press, GB, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Rarewaves.com USA, London, United Kingdom
EUR 92,35
Quantity available: More than 20
Hardback. Condition: New.
Published by Cambridge University Press, GB, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Rarewaves USA United, Oswego, IL, United States of America
EUR 78,35
Quantity available: More than 20
Hardback. Condition: New.
Published by Cambridge University Press, GB, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Rarewaves.com UK, London, United Kingdom
EUR 86,74
Quantity available: More than 20
Hardback. Condition: New.
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Majestic Books, Hounslow, United Kingdom
EUR 86,51
Quantity available: 4
Condition: New. Print on demand.
Published by Cambridge University Press, 2020
ISBN 10: 1108486827 ISBN 13: 9781108486828
Language: English
Bookseller: Biblios, Frankfurt am Main, Hesse, Germany
EUR 86,94
Quantity available: 4
Condition: New. Print on demand.
Bookseller: moluna, Greven, Germany
EUR 62,28
Quantity available: More than 20
Condition: New. This is a print-on-demand item and will be printed after your order is placed. Decision-making in the face of uncertainty is a challenge in machine learning, and the multi-armed bandit model is a common framework to address it. This comprehensive introduction is an excellent reference for established researchers and a resource for graduate students.