Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques, designed especially for discrete-event, stochastic systems that can be simulated but whose analytical models are difficult to express in closed mathematical form.
Key features of this revised and improved Second Edition include:
· Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms)
· Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics
· An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata
· A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs via Banach fixed-point theory and ordinary differential equations
Themed around three areas in separate sets of chapters – Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis – this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
"Synopsis" may belong to another edition of this book.
Abhijit Gosavi is a leading international authority on reinforcement learning, stochastic dynamic programming, and simulation-based optimization. The first edition of his Springer book, Simulation-Based Optimization, published in 2003, was the first text to appear on the topic. He is regularly an invited speaker at major national and international conferences on operations research, reinforcement learning, adaptive/approximate dynamic programming, and systems engineering.
He has published more than fifty journal and conference articles – many of which have appeared in leading scholarly journals such as Management Science, Automatica, INFORMS Journal on Computing, Machine Learning, Journal of Retailing, Systems and Control Letters and the European Journal of Operational Research. He has also authored numerous book chapters on simulation-based optimization and operations research. His research has been funded by the National Science Foundation, Department of Defense, Missouri Department of Transportation, University of Missouri Research Board and industry. He has consulted extensively for the U.S. Department of Veterans Affairs and the mass media as a statistical/simulation analyst. He has received teaching awards from the Institute of Industrial Engineers.
He currently serves as an Associate Professor of Engineering Management and Systems Engineering at Missouri University of Science and Technology in Rolla, MO. He holds a master's degree in Mechanical Engineering from the Indian Institute of Technology and a Ph.D. in Industrial Engineering from the University of South Florida. He is a member of INFORMS, IIE, and ASEE.
Bookseller: Moe's Books, Berkeley, CA, United States of America
Soft cover. Condition: Very good. No jacket. Second edition. Spine is cocked. Cover edges are slightly worn. Fore and bottom edges are stained, but readability is not impacted. Back binding glue is exposed. Inside is unmarked. Item ref. no.: 1147483
Quantity available: 1
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Item ref. no.: ria9781489977311_new
Quantity available: More than 20
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 536 pp. English. Item ref. no.: 9781489977311
Quantity available: 2
Bookseller: Brook Bookstore On Demand, Napoli, NA, Italy
Condition: New. This is a print-on-demand item. Item ref. no.: 4d0e141e0491c130321624b4a70aedb9
Quantity available: More than 20
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you order. Brings the field completely up to date. All computer code brought up to date. New material not covered in the first edition includes nested partitions, simultaneous perturbation, backtracking adaptive search, and the stochastic ruler method. Item ref. no.: 385644097
Quantity available: More than 20
Bookseller: THE SAINT BOOKSTORE, Southport, United Kingdom
Paperback / softback. Condition: New. This item is printed on demand. New copy - usually dispatched within 5-9 working days. Item ref. no.: C9781489977311
Quantity available: More than 20
Bookseller: preigu, Osnabrück, Germany
Paperback. Condition: New. Simulation-Based Optimization | Parametric Optimization Techniques and Reinforcement Learning | Abhijit Gosavi | Paperback | xxvi | English | 2016 | Humana | EAN 9781489977311 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Item ref. no.: 103395409
Quantity available: 5
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 536 pp. English. Item ref. no.: 9781489977311
Quantity available: 2
Bookseller: Biblios, Frankfurt am Main, HESSE, Germany
Condition: New. PRINT ON DEMAND pp. 508. Item ref. no.: 18375054681
Quantity available: 4
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; new stock - printed after ordering. Item ref. no.: 9781489977311
Quantity available: 1