This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays involves a fundamental trade-off between error and runtime.
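For concreteness, the data-parallel pattern behind synchronous SGD can be sketched in a few lines: each worker computes a gradient on a mini-batch from its own shard of the data, and the gradients are averaged before the shared model is updated. The sketch below is illustrative only and is not taken from the book; the workers are simulated in-process on a least-squares problem, and the function names, shard sizes, learning rate, and iteration count are assumptions chosen for the example.

```python
# A minimal sketch of synchronous (data-parallel) SGD on a least-squares
# problem. The workers are simulated in-process; in a real deployment each
# shard would live on a separate node and the averaging step would be an
# all-reduce. All hyperparameters here are illustrative assumptions.
import numpy as np

def worker_gradient(w, X_batch, y_batch):
    # Gradient of 0.5 * ||X w - y||^2 / batch_size on one worker's mini-batch.
    residual = X_batch @ w - y_batch
    return X_batch.T @ residual / len(y_batch)

def synchronous_sgd(X, y, num_workers=4, batch_size=32, lr=0.1, iterations=300, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    # Partition the data across workers once, up front.
    X_shards = np.array_split(X, num_workers)
    y_shards = np.array_split(y, num_workers)
    for _ in range(iterations):
        grads = []
        for X_s, y_s in zip(X_shards, y_shards):
            # Each worker samples a mini-batch from its own shard...
            idx = rng.choice(len(y_s), size=batch_size, replace=False)
            grads.append(worker_gradient(w, X_s[idx], y_s[idx]))
        # ...then all workers synchronize: gradients are averaged and applied.
        w -= lr * np.mean(grads, axis=0)
    return w

# Usage: recover a planted linear model from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(1024, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=1024)
w_hat = synchronous_sgd(X, y)
print("recovery error:", np.linalg.norm(w_hat - w_true))
```

The single synchronization point per iteration in this sketch is exactly what the variants discussed in the book (asynchronous, local-update, quantized and sparsified, and decentralized SGD) relax in different ways, which is the source of the error-versus-runtime trade-off the author analyzes.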
"Sinopsis" puede pertenecer a otra edición de este libro.
EUR 4,65 gastos de envío desde Reino Unido a España
Destinos, gastos y plazos de envíoLibrería: Ria Christie Collections, Uxbridge, Reino Unido
Condición: New. In. Nº de ref. del artículo: ria9783031190681_new
Cantidad disponible: Más de 20 disponibles
Librería: PBShop.store US, Wood Dale, IL, Estados Unidos de America
PAP. Condición: New. New Book. Shipped from UK. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000. Nº de ref. del artículo: L0-9783031190681
Cantidad disponible: Más de 20 disponibles
Librería: PBShop.store UK, Fairford, GLOS, Reino Unido
PAP. Condición: New. New Book. Delivered from our UK warehouse in 4 to 14 business days. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000. Nº de ref. del artículo: L0-9783031190681
Cantidad disponible: Más de 20 disponibles
Librería: Chiron Media, Wallingford, Reino Unido
PF. Condición: New. Nº de ref. del artículo: 6666-IUK-9783031190681
Cantidad disponible: 10 disponibles