
Adaptive Dynamic Programming for Control: Algorithms and Stability (Communications and Control Engineering) - Softcover

 
9781447158813: Adaptive Dynamic Programming for Control: Algorithms and Stability (Communications and Control Engineering)

Synopsis

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming in Discrete Time approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive; affine, switched, singularly perturbed and time-delay nonlinear systems are discussed as are the uses of neural networks and techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, and proof provided that the iterative value function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences;
• finite-horizon control, implemented in discrete-time nonlinear systems showing the reader how to obtain suboptimal control solutions within a fixed number of control steps and with results more easily applied in real systems than those usually gained from infinite-horizon control;
• nonlinear games for which a pair of mixed optimal policies are derived for solving games both when the saddle point does not exist, and, when it does, avoiding the existence conditions of the saddle point.
Non-zero-sum games are studied in the context of a single network scheme in which policies are obtained guaranteeing system stability and minimizing the individual performance function yielding a Nash equilibrium.
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming in Discrete Time:
• establishes the fundamental theory involved clearly with each chapter devoted to a clearly identifiable control paradigm;
• demonstrates convergence proofs of the ADP algorithms to deepen understanding of the derivation of stability and convergence with the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
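To make the iterative scheme described above concrete, here is a minimal sketch of plain tabular value iteration for a discrete-time nonlinear system. It is not the book's neural-network-based ADP implementation: the scalar dynamics, quadratic stage cost, grids, and tolerance below are hypothetical choices, and the sketch only illustrates the update v_{i+1}(x) = min_u { U(x,u) + v_i(F(x,u)) } started from v_0 = 0, whose convergence to the optimal value function is the kind of result the synopsis refers to.

```python
# Minimal sketch: tabular value iteration for a scalar discrete-time system.
# This is NOT the book's neural-network-based ADP design; it only illustrates
# the iterative update V_{i+1}(x) = min_u [ U(x,u) + V_i(F(x,u)) ] with V_0 = 0.
# The dynamics F, the cost weights, and the grids are hypothetical choices.
import numpy as np

def F(x, u):
    # Hypothetical affine-in-control nonlinear dynamics x_{k+1} = f(x_k) + g(x_k) u_k
    return 0.8 * np.sin(x) + 0.5 * u

def utility(x, u, q=1.0, r=1.0):
    # Quadratic stage cost U(x, u) = q x^2 + r u^2
    return q * x**2 + r * u**2

xs = np.linspace(-2.0, 2.0, 201)   # state grid
us = np.linspace(-1.0, 1.0, 101)   # control grid
V = np.zeros_like(xs)              # V_0 = 0

for sweep in range(200):           # value-iteration sweeps
    V_next = np.empty_like(V)
    for i, x in enumerate(xs):
        # One-step lookahead: evaluate every gridded control and interpolate
        # the current value function V_i at the successor state reached from x.
        costs = utility(x, us) + np.interp(F(x, us), xs, V)
        V_next[i] = costs.min()
    if np.max(np.abs(V_next - V)) < 1e-6:   # stop once the sequence has converged
        break
    V = V_next

# The greedy control at any gridded state is the u attaining the minimum above.
```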

"Sinopsis" puede pertenecer a otra edición de este libro.

From the Back Cover

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive; affine, switched, singularly perturbed and time-delay nonlinear systems are discussed as are the uses of neural networks and techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, and proof provided that the iterative value function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences;
• finite-horizon control, implemented in discrete-time nonlinear systems showing the reader how to obtain suboptimal control solutions within a fixed number of control steps and with results more easily applied in real systems than those usually gained from infinite-horizon control;
• nonlinear games for which a pair of mixed optimal policies are derived for solving games both when the saddle point does not exist, and, when it does, avoiding the existence conditions of the saddle point.
Non-zero-sum games are studied in the context of a single network scheme in which policies are obtained guaranteeing system stability and minimizing the individual performance function yielding a Nash equilibrium.
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control:
• establishes the fundamental theory involved clearly with each chapter devoted to a clearly identifiable control paradigm;
• demonstrates convergence proofs of the ADP algorithms to deepen understanding of the derivation of stability and convergence with the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.

The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.

"Sobre este título" puede pertenecer a otra edición de este libro.



Search results for Adaptive Dynamic Programming for Control: Algorithms...


Huaguang Zhang; Derong Liu; Yanhong Luo; Ding Wang
Published by Springer London, 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Softcover
Print on Demand

Bookseller: moluna, Greven, Germany

Seller rating: 5 out of 5 stars

Condition: New. This item is a print-on-demand item and will be printed for you after your order. Convergence proofs of the algorithms presented teach readers how to derive necessary stability and convergence criteria for their own systems. Establishes the fundamentals of ADP theory so that student readers can extrapolate their learning into co. Item reference no.: 447761134

Buy New: EUR 136,16
Shipping: EUR 19,49 from Germany to Spain
Quantity available: More than 20


Zhang, Huaguang; Liu, Derong; Luo, Yanhong; Wang, Ding
Published by Springer, 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Softcover

Bookseller: Ria Christie Collections, Uxbridge, United Kingdom

Seller rating: 5 out of 5 stars

Condition: New. In. Item reference no.: ria9781447158813_new

Buy New: EUR 159,14
Shipping: EUR 5,17 from the United Kingdom to Spain
Quantity available: More than 20


Huaguang Zhang
Published by Springer London, Jan 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Paperback
Print on Demand

Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer - new stock. 440 pp. English. Item reference no.: 9781447158813

Buy New: EUR 160,49
Shipping: EUR 11,00 from Germany to Spain
Quantity available: 2 available


Huaguang Zhang
Published by Springer London, 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Paperback

Bookseller: AHA-BUCH GmbH, Einbeck, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. Print on demand - new stock - printed after ordering. Item reference no.: 9781447158813

Buy New: EUR 164,49
Shipping: EUR 11,99 from Germany to Spain
Quantity available: 1 available


Huaguang Zhang
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Paperback

Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 440 pp. English. Item reference no.: 9781447158813

Buy New: EUR 160,49
Shipping: EUR 35,00 from Germany to Spain
Quantity available: 2 available


Zhang, Huaguang; Liu, Derong; Luo, Yanhong; Wang, Ding
Published by Springer, 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
New Softcover

Bookseller: Lucky's Textbooks, Dallas, TX, United States of America

Seller rating: 5 out of 5 stars

Condition: New. Item reference no.: ABLIING23Mar2411530317417

Buy New: EUR 157,87
Shipping: EUR 64,23 from the United States of America to Spain
Quantity available: More than 20


Zhang, Huaguang; Liu, Derong; Luo, Yanhong; Wang, Ding
Published by Springer, 2015
ISBN 10: 1447158814 ISBN 13: 9781447158813
Used Paperback

Bookseller: Mispah books, Redhill, SURRE, United Kingdom

Seller rating: 4 out of 5 stars

Paperback. Condition: Like New. Item reference no.: ERICA77314471588146

Buy Used: EUR 250,35
Shipping: EUR 28,80 from the United Kingdom to Spain
Quantity available: 1 available