Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: ALLBOOKS1, Direk, SA, Australia
EUR 67.55
Quantity available: 2
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 64.26
Quantity available: 4
HRD. Condition: New. New book. Shipped from UK. Established seller since 2000.
Published by Cambridge University Press, Cambridge, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: San Francisco Book Company, Paris, France
EUR 60.00
Quantity available: 1
Hardcover. Condition: Very good. Hardcover octavo, illustrated boards, 435 pp. Standard shipping (no tracking or insurance) / Priority (with tracking) / Custom quote for large or heavy orders.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 66.98
Quantity available: 4
Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: California Books, Miami, FL, United States of America
EUR 70.78
Quantity available: More than 20
Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: GreatBookPrices, Columbia, MD, United States of America
EUR 61.39
Quantity available: 1
Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Majestic Books, Hounslow, United Kingdom
EUR 68.75
Quantity available: 1
Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Books Puddle, New York, NY, United States of America
EUR 69.82
Quantity available: 1
Condition: New. New edition.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
EUR 75.99
Quantity available: 4
Condition: New. 2022. Hardcover.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 62.76
Quantity available: 1
Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: THE SAINT BOOKSTORE, Southport, United Kingdom
EUR 69.09
Quantity available: More than 20
Hardback. Condition: New. New copy - usually dispatched within 4 working days. 1041.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 69.43
Quantity available: 1
Hardcover. Condition: Brand New. 435 pages. 9.75 x 6.75 x 1.00 inches. In stock.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Basi6 International, Irving, TX, United States of America
EUR 61.26
Quantity available: 2
Condition: Brand New. New. US edition. Expedited shipping for all USA and Europe orders excluding PO Box. Excellent customer service.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: GreatBookPrices, Columbia, MD, United States of America
EUR 71.66
Quantity available: 1
Condition: As New. Unread book in perfect condition.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 71.45
Quantity available: 1
Condition: As New. Unread book in perfect condition.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 73.47
Quantity available: 1
Condition: New.
Published by Cambridge University Press, Jun 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 76.16
Quantity available: 1
Book. Condition: New. New stock - 'A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.'
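The publisher's description above refers to deep Q-learning and to random exploration. For readers wondering what the 'Q' stands for, the following is a minimal illustrative sketch of the tabular Q-learning update on a hypothetical five-state toy chain; the environment, constants, and names are all assumptions made for illustration and are not taken from the book.

```python
# Minimal tabular Q-learning sketch on a hypothetical 5-state chain.
# The agent starts at state 0 and earns reward 1 for reaching state 4.
# Action 0 moves left, action 1 moves right. Everything here is illustrative.
import random

random.seed(0)
N_STATES = 5
ACTIONS = (0, 1)                     # 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: one row of action-values per state, initialised to zero.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic toy dynamics; the episode ends on reaching the last state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = (nxt == N_STATES - 1)
    reward = 1.0 if done else 0.0
    return nxt, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy exploration, breaking ties between equal Q-values at random
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best])
        nxt, reward, done = step(state, action)
        # Q-learning temporal-difference update
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

print(Q)  # the greedy policy read off this table moves right toward the rewarding state
```

Run as-is, this toy script settles on a Q-table whose greedy policy always moves right toward the terminal state; it is only a sketch of the update rule, not the book's treatment of the subject.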
Published by Cambridge University Press, GB, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Rarewaves.com UK, London, United Kingdom
EUR 88.86
Quantity available: 2
Hardback. Condition: New.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: moluna, Greven, Germany
EUR 72.40
Quantity available: 1
Condition: New. The book is written for newcomers to reinforcement learning who wish to write code for various applications, from robotics to power systems to supply chains. It also contains advanced material designed to prepare graduate students and professionals for both research and application of reinforcement learning and optimal control techniques.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Kennys Bookstore, Olney, MD, United States of America
EUR 94.13
Quantity available: 4
Condition: New. 2022. Hardcover. Books ship from the US and Ireland.
Published by Cambridge University Press, GB, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Rarewaves.com USA, London, United Kingdom
EUR 96.23
Quantity available: 2
Hardback. Condition: New.
Published by Cambridge University Press, Cambridge, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: CitiRetail, Stevenage, United Kingdom
EUR 71.57
Quantity available: 1
Hardcover. Condition: New. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 98.96
Quantity available: 2
Hardcover. Condition: Brand New. 435 pages. 9.75 x 6.75 x 1.00 inches. In stock.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Lucky's Textbooks, Dallas, TX, United States of America
EUR 62.86
Quantity available: More than 20
Condition: New.
Published by Cambridge University Press, Cambridge, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Grand Eagle Retail, Mason, OH, United States of America
EUR 76.72
Quantity available: 1
Hardcover. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Published by Cambridge University Press, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: Best Price, Torrance, CA, United States of America
EUR 57.56
Quantity available: 2
Condition: New. Super fast shipping.
Published by Cambridge University Press, Cambridge, 2022
ISBN 10: 1316511960 ISBN 13: 9781316511961
Language: English
Seller: AussieBookSeller, Truganina, VIC, Australia
EUR 123.81
Quantity available: 1
Hardcover. Condition: New. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.