This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to use LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, it covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies for optimizing and deploying LLMs.

Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models such as GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Among the LLMs used to generate completions to prompts are Llama-3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, including advanced methods such as Chain of Thought (CoT) and Tree of Thought prompting.

As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods such as LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies such as dynamic quantization and probabilistic quantization that reduce model size without sacrificing performance.
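As an illustration of the self-attention material mentioned above (Chapter 8), the following is a minimal sketch of a single-head scaled dot-product self-attention layer in PyTorch; the class name, dimensions, and usage are illustrative assumptions and are not taken from the book's own code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Minimal single-head scaled dot-product self-attention (illustrative sketch)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Learned projections for queries, keys, and values
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, embed_dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Attention scores scaled by sqrt(embed_dim): (batch, seq_len, seq_len)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum of the values: (batch, seq_len, embed_dim)
        return weights @ v

# Hypothetical usage with random embeddings
attn = SelfAttention(embed_dim=64)
out = attn(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])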
FEATURES
• Covers the full lifecycle of working with LLMs, from model selection to deployment
• Includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization (a brief quantization sketch follows this list)
• Teaches readers to enhance model efficiency with advanced optimization techniques
• Includes companion files with code and images -- available from the publisher
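As a taste of the quantization coverage in Chapters 9 and 10, here is a minimal sketch of PyTorch dynamic quantization applied to a small linear model; the model architecture, layer choices, and settings are illustrative assumptions rather than code from the book.

import torch
import torch.nn as nn

# A small illustrative model; the book's own examples may differ.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Dynamic quantization converts the weights of the listed layer types
# to int8 while activations stay in floating point, shrinking the model
# with little change to inference code.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized)

Here the Linear weights are stored as 8-bit integers and dequantized on the fly at inference time, which is one way to reduce model size without retraining.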
"Sinopsis" puede pertenecer a otra edición de este libro.
Oswald Campesato (San Francisco, CA) specializes in Deep Learning, Python, Data Science, and Generative AI. He is the author or co-author of more than forty-five books, including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury Learning).
"Sobre este título" puede pertenecer a otra edición de este libro.