This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.
Daniel A. Roberts was cofounder and CTO of Diffeo, an AI company acquired by Salesforce; a research scientist at Facebook AI Research; and a member of the School of Natural Sciences at the Institute for Advanced Study in Princeton, NJ. He was a Hertz Fellow, earning a PhD from MIT in theoretical physics, and was also a Marshall Scholar at Cambridge and Oxford Universities.
Sho Yaida is a research scientist at Meta AI. Prior to joining Meta AI, he obtained his PhD in physics at Stanford University and held postdoctoral positions at MIT and at Duke University. At Meta AI, he uses tools from theoretical physics to understand neural networks, the topic of this book.
Boris Hanin is an Assistant Professor at Princeton University in the Operations Research and Financial Engineering Department. Prior to joining Princeton in 2020, Boris was an Assistant Professor at Texas A&M in the Math Department and an NSF postdoc at MIT. He has taught graduate courses on the theory and practice of deep learning at both Texas A&M and Princeton.
Bookseller: BooksRun, Philadelphia, PA, United States of America
Hardcover. Condition: Very Good. A well-cared-for item that has seen limited use. It may show minor signs of wear. All text is legible, with all pages included. It may have slight markings and/or highlighting. Item ref.: 1316519333-8-1
Quantity available: 1
Bookseller: medimops, Berlin, Germany
Condition: As new. Like new. Item ref.: M01316519333-N
Quantity available: 1
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref.: 44117012-n
Quantity available: More than 20
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
Hardcover. Condition: New. This is the first book focused entirely on deep learning theory. Tools from theoretical physics are borrowed and adapted to explain, from first principles, how realistic deep neural networks work, benefiting practitioners looking to build better AI models and theorists looking for a unifying framework for understanding intelligence. This item is printed on demand. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item ref.: 9781316519332
Quantity available: 1
Bookseller: California Books, Miami, FL, United States of America
Condition: New. Item ref.: I-9781316519332
Quantity available: More than 20
Bookseller: Speedyhen LLC, Hialeah, FL, United States of America
Condition: New. Item ref.: NWUS9781316519332
Quantity available: 6
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item ref.: 44117012
Quantity available: More than 20
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item ref.: ria9781316519332_new
Quantity available: More than 20
Bookseller: Chiron Media, Wallingford, United Kingdom
Hardcover. Condition: New. Item ref.: 6666-GRD-9781316519332
Quantity available: 2
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item ref.: 44117012-n
Quantity available: More than 20