Published by Mercury Learning and Information, 2024
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: Books From California, Simi Valley, CA, United States of America
EUR 41,37
Quantity available: 1
Paperback. Condition: Very Good. Clean, unmarked copy.
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: California Books, Miami, FL, United States of America
EUR 47,77
Quantity available: More than 20
Condition: New.
Published by Mercury Learning and Information, 1/1/2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: BargainBookStores, Grand Rapids, MI, United States of America
EUR 46,37
Quantity available: 5
Paperback or softback. Condition: New. Large Language Models for Developers: A Prompt-Based Exploration of LLMs 3.7. Book.
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 52,51
Quantity available: More than 20
Condition: New. In.
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
EUR 63,17
Quantity available: 15
Condition: New. 2025. Paperback.
EUR 64,90
Quantity available: More than 20
Paperback. Condition: New. This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to utilize LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, this book covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies to optimize and deploy LLMs. Beginning with an introduction to generative AI, the book explains distinctions between conversational AI and generative models like GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Some of the LLMs used for generating completions to prompts include Llama-3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, covering advanced methods like Chain of Thought (CoT) and Tree of Thought prompts. As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods like LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies like dynamic quantization and probabilistic quantization, which help reduce model size without sacrificing performance. FEATURES: Covers the full lifecycle of working with LLMs, from model selection to deployment. Includes code samples using practical Python code for implementing prompt engineering, fine-tuning, and quantization. Teaches readers to enhance model efficiency with advanced optimization techniques. Includes companion files with code and images, available from the publisher.
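For a sense of the kind of material the description promises (the Chapter 8 step-by-step self-attention guidance), here is a minimal, self-contained sketch of scaled dot-product self-attention in NumPy. It is an illustrative assumption, not code from the book; the shapes, weight matrices, and variable names are invented for the example.

# Minimal sketch of scaled dot-product self-attention (illustrative only;
# not code from the book -- shapes and names are assumptions).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head)."""
    q = x @ w_q                      # project inputs to queries
    k = x @ w_k                      # project inputs to keys
    v = x @ w_v                      # project inputs to values
    d_head = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_head)   # scaled dot products
    # softmax over the key dimension (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v               # attention-weighted values

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)))
print(out.shape)  # (4, 8)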
Bookseller: Rarewaves USA United, OSWEGO, IL, United States of America
EUR 66,33
Quantity available: More than 20
Paperback. Condition: New. Publisher description as above.
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: Kennys Bookstore, Olney, MD, United States of America
EUR 77,95
Quantity available: 15
Condition: New. 2025. Paperback. Books ship from the US and Ireland.
Bookseller: AussieBookSeller, Truganina, VIC, Australia
EUR 50,01
Quantity available: 1
Paperback. Condition: New. Publisher description as above. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
EUR 85,01
Quantity available: More than 20
Paperback. Condition: New. Publisher description as above.
Published by Mercury Learning & Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: Revaluation Books, Exeter, United Kingdom
EUR 77,06
Quantity available: 2
Paperback. Condition: Brand New. 1012 pages. 6.00x1.90x9.00 inches. In Stock.
EUR 56,65
Quantity available: 1
Paperback. Condition: New. Publisher description as above. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
EUR 92,13
Quantity available: More than 20
Paperback. Condition: New. Publisher description as above.
Published by Mercury Learning and Information / De Gruyter, Jan 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 58,95
Quantity available: 2
Paperback. Condition: New. New stock. Publisher description as above. Walter de Gruyter GmbH, Genthiner Strasse 13, 10785 Berlin. 1046 pp. English.
Bookseller: Grand Eagle Retail, Mason, OH, United States of America
EUR 46,31
Quantity available: 1
Paperback. Condition: New. Publisher description as above. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 58,95
Quantity available: 2
Paperback. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. 1046 pp. English.
Bookseller: moluna, Greven, Germany
EUR 51,60
Quantity available: More than 20
Condition: New. This is a print-on-demand item and will be printed for you after your order. Oswald Campesato (San Francisco, CA) specializes in Deep Learning, Python, Data Science, and Generative AI. He is the author/co-author of over forty-five books including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury Learning).
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
EUR 65,89
Quantity available: 1
Paperback. Condition: New. Printed after ordering. Publisher description as above.
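The publisher description above also highlights dynamic quantization (Chapters 9 and 10). As a hedged, illustrative sketch of what that technique looks like in practice, assuming PyTorch (the book's Python samples may use a different stack), the following quantizes the linear layers of a small stand-in model to int8:

# Illustrative sketch of post-training dynamic quantization in PyTorch.
# This is an assumption-laden example, not code from the book.
import torch
import torch.nn as nn

# A small stand-in model; real use would target an LLM's linear layers.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)

# Dynamically quantize nn.Linear weights to int8; activations are
# quantized on the fly at inference time, shrinking model size.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 128])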