Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
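To make the stochastic setting mentioned above concrete, here is a minimal, illustrative sketch of the classic upper-confidence-bound (UCB) strategy on Bernoulli-reward arms. It is not code from the book; the arm means and horizon below are made-up values chosen purely for demonstration.

# Minimal sketch of a stochastic multi-armed bandit run with the UCB strategy.
# The Bernoulli arm means and the horizon are illustrative assumptions only.
import math
import random

def ucb(arm_means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of times each arm has been played
    totals = [0.0] * k    # sum of observed rewards per arm
    best_mean = max(arm_means)
    regret = 0.0          # cumulative pseudo-regret against the best arm

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialise its estimate
        else:
            # pick the arm with the largest upper confidence bound
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        totals[arm] += reward
        regret += best_mean - arm_means[arm]
    return regret

print(ucb(arm_means=[0.3, 0.5, 0.7], horizon=10_000))
# For fixed gaps between the arm means, the pseudo-regret of UCB typically grows
# only logarithmically with the horizon, the kind of guarantee the book proves in detail.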
"Sinopsis" puede pertenecer a otra edición de este libro.
Tor Lattimore is a research scientist at DeepMind. His research is focused on decision making in the face of uncertainty, including bandit algorithms and reinforcement learning. Before joining DeepMind he was an assistant professor at Indiana University and a postdoctoral fellow at the University of Alberta.
Csaba Szepesvári is a Professor in the Department of Computing Science at the University of Alberta and a Principal Investigator of the Alberta Machine Intelligence Institute. He also leads the 'Foundations' team at DeepMind. He has co-authored a book on nonlinear approximate adaptive controllers and authored a book on reinforcement learning, in addition to publishing over 200 journal and conference papers. He is an action editor of the Journal of Machine Learning Research.
"Sobre este título" puede pertenecer a otra edición de este libro.
Bookseller: Goodwill Books, Hillsboro, OR, United States of America
Condition: Good. Signs of wear and consistent use. Item reference no.: 3IIT7G0063MN_ns
Quantity available: 1
Bookseller: BooksRun, Philadelphia, PA, United States of America
Hardcover. Condition: Good. A pre-owned item in good condition that includes all the pages. It may show some general signs of wear and tear, such as markings, highlighting, slight damage to the cover or minimal wear to the binding, but these will not affect the overall reading experience. Item reference no.: 1108486827-11-1
Quantity available: 1
Bookseller: Books From California, Simi Valley, CA, United States of America
Hardcover. Condition: Very Good. Cover and edges may have some wear. Item reference no.: mon0003875466
Quantity available: 1
Bookseller: Best Price, Torrance, CA, United States of America
Condition: New. Super fast shipping. Item reference no.: 9781108486828
Quantity available: 4
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item reference no.: 40407250-n
Quantity available: More than 20
Bookseller: Lucky's Textbooks, Dallas, TX, United States of America
Condition: New. Item reference no.: ABLIING23Mar2317530285308
Quantity available: More than 20
Bookseller: BargainBookStores, Grand Rapids, MI, United States of America
Hardback or cased book. Condition: New. Bandit Algorithms. Item reference no.: BBS-9781108486828
Quantity available: 5
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item reference no.: 40407250
Quantity available: More than 20
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
Hardcover. Condition: New. Publisher's description as given above. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item reference no.: 9781108486828
Quantity available: 1
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item reference no.: ria9781108486828_new
Quantity available: More than 20