EUR 294,19
Quantity available: More than 20
Hardcover. Condition: New.
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 350,47
Quantity available: More than 20
Condition: New.
EUR 338,10
Quantity available: More than 20
Condition: New.
Published by Springer US, 2001
ISBN 10: 0792374592 ISBN 13: 9780792374596
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 366,41
Quantity available: 1
Hardcover. Condition: New. Print-on-demand new stock, printed after ordering. Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES. The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies the sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and the values of the objective functions associated with this process. The goal is to select a 'good' control policy. In real life, decisions that humans and computers make at all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they have an impact on the future by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
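The publisher's description above sketches the MDP paradigm: a controlled discrete-time stochastic system, a policy that induces a process and objective values, and the goal of selecting a good policy. As a minimal illustration of that idea (not taken from the book), the Python sketch below runs value iteration on an invented two-state, two-action discounted MDP; the states, actions, transition probabilities, rewards, and discount factor are all assumptions chosen for the example.

    # Minimal value-iteration sketch for a toy discounted MDP.
    # All model data here is invented for illustration.
    # P[s][a] is a list of (next_state, probability) pairs;
    # R[s][a] is the immediate reward for action a in state s.

    GAMMA = 0.9  # discount factor (assumed)

    P = {
        0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
        1: {"stay": [(1, 1.0)], "go": [(0, 0.8), (1, 0.2)]},
    }
    R = {
        0: {"stay": 0.0, "go": 1.0},
        1: {"stay": 2.0, "go": 0.0},
    }

    def value_iteration(P, R, gamma=GAMMA, tol=1e-8):
        """Return the optimal value function and a greedy policy."""
        V = {s: 0.0 for s in P}
        while True:
            # Bellman optimality update: best immediate reward plus
            # discounted expected value of the successor state.
            V_new = {
                s: max(
                    R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                    for a in P[s]
                )
                for s in P
            }
            delta = max(abs(V_new[s] - V[s]) for s in P)
            V = V_new
            if delta < tol:
                break
        # Extract a 'good' control policy: act greedily w.r.t. V.
        policy = {
            s: max(
                P[s],
                key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]),
            )
            for s in P
        }
        return V, policy

    V, policy = value_iteration(P, R)
    print(V)       # optimal expected discounted return from each state
    print(policy)  # the selected control policy

The loop body is the standard dynamic-programming step the description alludes to: each update trades off the immediate reward of a decision against its discounted effect on the future dynamics.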
Published by Springer US, Sep 2001
ISBN 10: 0792374592 ISBN 13: 9780792374596
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 353,09
Quantity available: 2
Hardcover. Condition: New. New stock. (Same publisher's description as above.) Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 578 pp. English.
Published by Springer US, Springer New York, 2012
ISBN 10: 1461352487 ISBN 13: 9781461352488
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 418,87
Quantity available: 1
Paperback. Condition: New. Print-on-demand new stock, printed after ordering. (Same publisher's description as above.)
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 277,13
Quantity available: 2
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. (Same publisher's description as above.) 576 pp. English.
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 353,09
Quantity available: 2
Hardcover. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. (Same publisher's description as above.) 578 pp. English.
Published by Springer US, Springer New York, Oct 2012
ISBN 10: 1461352487 ISBN 13: 9781461352488
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 406,59
Quantity available: 1
Paperback. Condition: New. This item is a print-on-demand title. New stock. (Same publisher's description as above.) Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 576 pp. English.