Single-Controller and Multi-Controller Adaptive Dynamic Programming (English Edition)
  • ISBN: 9787030605276
    Authors: 宋睿卓, 魏庆来, 李擎
  • Binding: Paperback
    Format: B5
  • Pages: 269
    Word count: 350,000
  • Language: English
  • Publisher: Science Press (科学出版社)
  • Price: ¥168.00
  • Medium: Print on demand



Contents

    1 Introduction 1
    1.1 Optimal Control 1
    1.1.1 Continuous-Time LQR 1
    1.1.2 Discrete-Time LQR 2
    1.2 Adaptive Dynamic Programming 3
    1.3 Review of Matrix Algebra 5
    References 6
    2 Neural-Network-Based Approach for Finite-Time Optimal Control 7
    2.1 Introduction 7
    2.2 Problem Formulation and Motivation 9
    2.3 The Data-Based Identifier 9
    2.4 Derivation of the Iterative ADP Algorithm with Convergence Analysis 11
    2.5 Neural Network Implementation of the Iterative Control Algorithm 17
    2.6 Simulation Study 18
    2.7 Conclusion 20
    References 22
    3 Nearly Finite-Horizon Optimal Control for Nonaffine Time-Delay Nonlinear Systems 25
    3.1 Introduction 25
    3.2 Problem Statement 26
    3.3 The Iteration ADP Algorithm and Its Convergence 30
    3.3.1 The Novel ADP Iteration Algorithm 30
    3.3.2 Convergence Analysis of the Improved Iteration Algorithm 33
    3.3.3 Neural Network Implementation of the Iteration ADP Algorithm 38
    3.4 Simulation Study 40
    3.5 Conclusion 48
    References 48
    4 Multi-objective Optimal Control for Time-Delay Systems 49
    4.1 Introduction 49
    4.2 Problem Formulation 50
    4.3 Derivation of the ADP Algorithm for Time-Delay Systems 51
    4.4 Neural Network Implementation for the Multi-objective Optimal Control Problem of Time-Delay Systems 54
    4.5 Simulation Study 55
    4.6 Conclusion 61
    References 62
    5 Multiple Actor-Critic Optimal Control via ADP 63
    5.1 Introduction 63
    5.2 Problem Statement 65
    5.3 SIANN Architecture-Based Classification 66
    5.4 Optimal Control Based on ADP 69
    5.4.1 Model Neural Network 70
    5.4.2 Critic Network and Action Network 74
    5.5 Simulation Study 82
    5.6 Conclusion 91
    References 91
    6 Optimal Control for a Class of Complex-Valued Nonlinear Systems 95
    6.1 Introduction 95
    6.2 Motivations and Preliminaries 96
    6.3 ADP-Based Optimal Control Design 99
    6.3.1 Critic Network 99
    6.3.2 Action Network 101
    6.3.3 Design of the Compensation Controller 102
    6.3.4 Stability Analysis 103
    6.4 Simulation Study 107
    6.5 Conclusion 110
    References 110
    7 Off-Policy Neuro-Optimal Control for Unknown Complex-Valued Nonlinear Systems 113
    7.1 Introduction 113
    7.2 Problem Statement 114
    7.3 Off-Policy Optimal Control Method 115
    7.3.1 Convergence Analysis of Off-Policy PI Algorithm 117
    7.3.2 Implementation Method of Off-Policy Iteration Algorithm 119
    7.3.3 Implementation Process 122
    7.4 Simulation Study 122
    7.5 Conclusion 125
    References 125
    8 Approximation-Error-ADP-Based Optimal Tracking Control for Chaotic Systems 127
    8.1 Introduction 127
    8.2 Problem Formulation and Preliminaries 128
    8.3 Optimal Tracking Control Scheme Based on Approximation-Error ADP Algorithm 130
    8.3.1 Description of Approximation-Error ADP Algorithm 130
    8.3.2 Convergence Analysis of the Iterative ADP Algorithm 132
    8.4 Simulation Study 136
    8.5 Conclusion 144
    References 144
    9 Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems with Disturbances 147
    9.1 Introduction 147
    9.2 Problem Statement 148
    9.3 Off-Policy Actor-Critic Integral Reinforcement Learning 151
    9.3.1 On-Policy IRL for Nonzero Disturbance 151
    9.3.2 Off-Policy IRL for Nonzero Disturbance 152
    9.3.3 NN Approximation for Actor-Critic Structure 154
    9.4 Disturbance Compensation Redesign and Stability Analysis 157
    9.4.1 Disturbance Compensation Off-Policy Controller Design 157
    9.4.2 Stability Analysis 158
    9.5 Simulation Study 161
    9.6 Conclusion 163
    References 163
    10 An Iterative ADP Method to Solve for a Class of Nonlinear Zero-Sum Differential Games 165
    10.1 Introduction 165
    10.2 Preliminaries and Assumptions 166
    10.3 Iterative Approximate Dynamic Programming Method for ZS Differential Games 169
    10.3.1 Derivation of the Iterative ADP Method 169
    10.3.2 The Procedure of the Method 174
    10.3.3 The Properties of the Iterative ADP Method 176
    10.4 Neural Network Implementation 190
    10.4.1 The Model Network 191
    10.4.2 The Critic Network 192
    10.4.3 The Action Network 193
    10.5 Simulation Study 195
    10.6 Conclusion 204
    References 204
    11 Neural-Network-Based Synchronous Iteration Learning Method for Multi-player Zero-Sum Games 207
    11.1 Introduction 207
    11.2 Motivations and Preliminaries 208
    11.3 Synchronous Solution of Multi-player ZS Games 213
    11.3.1 Derivation of Off-Policy Algorithm 213
    11.3.2 Implementation Method for Off-Policy Algorithm 214
    11.3.3 Stability Analysis 218
    11.4 Simulation Study 219
    11.5 Conclusion 224
    References 224
    12 Off-Policy Integral Reinforcement Learning Method for Multi-player Non-Zero-Sum Games 227
    12.1 Introduction 227
    12.2 Problem Statement 228
    12.3 Multi-player Learning PI Solution for NZS Games 229
    12.4 Off-Policy Integral Reinforcement Learning Method 234
    12.4.1 Derivation of Off-Policy Algorithm 234
    12.4.2 Implementation Method for Off-Policy Algorithm 236
    12.4.3 Stability Analysis 238
    12.5 Simulation Study 242
    12.6 Conclusion 248
    References 248
    13 Optimal Distributed Synchronization Control for Heterogeneous Multi-agent Graphical Games 251
    13.1 Introduction 251
    13.2 Graphs and Synchronization of Multi-agent Systems 252
    13.2.1 Graph Theory 252
    13.2.2 Synchronization and Tracking Error Dynamic Systems 253
    13.3 Optimal Distributed Cooperative Control for Multi-agent Differential Graphical Games 255
    13.3.1 Cooperative Performance Index Function 256
    13.3.2 Nash Equilibrium 257
    13.4 Heterogeneous Multi-agent Differential Graphical Games by Iterative ADP Algorithm 259
    13.4.1 Derivation of the Heterogeneous Multi-agent Differential Graphical Games 259
    13.4.2 Properties of the Developed Policy Iteration Algorithm 260
    13.4.3 Heterogeneous Multi-agent Policy Iteration Algorithm 264
    13.5 Simulation Study 265
    13.6 Conclusion 269
    References 269
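
A minimal illustrative sketch (not code from the book): the contents above build on the discrete-time LQR problem (Sect. 1.1.2) and the iterative, value-iteration-style treatment that underlies ADP (Sect. 1.2). Assuming the standard setup x_{k+1} = A x_k + B u_k with quadratic cost, the Python snippet below iterates the Riccati recursion from P = 0 until it converges to the optimal value matrix P and feedback gain K. The system matrices and the function name lqr_value_iteration are hypothetical placeholders chosen for illustration.

import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500, tol=1e-10):
    """Value iteration for the discrete-time LQR problem.

    Iterates P_{i+1} = Q + A'P_i A - A'P_i B (R + B'P_i B)^{-1} B'P_i A,
    which converges to the solution of the discrete algebraic Riccati
    equation for a stabilizable/detectable problem, giving u_k = -K x_k.
    """
    n = A.shape[0]
    P = np.zeros((n, n))                            # start from the zero value function
    K = np.zeros((B.shape[1], n))
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # greedy gain for the current P
        P_next = Q + A.T @ P @ (A - B @ K)          # Riccati recursion
        if np.max(np.abs(P_next - P)) < tol:        # stop when the value matrix settles
            P = P_next
            break
        P = P_next
    return P, K

if __name__ == "__main__":
    # Illustrative double-integrator-like system (placeholder values).
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    P, K = lqr_value_iteration(A, B, Q, R)
    print("P =\n", P)
    print("K =", K)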