Dynamic Programming and Optimal Control (2 Vol Set)

Price 127.78 - 134.50 USD

EAN/UPC/ISBN Code 9781886529083


A two-volume set consisting of the latest editions of the two volumes: the 3rd edition (2005) for Vol. I and the 4th edition (2012) for Vol. II. Much supplementary material can be found at the book's web page. The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite-horizon problems that is suitable for classroom use. The second volume is oriented towards mathematical analysis and computation, treats infinite-horizon problems extensively, and provides an up-to-date account of approximate large-scale dynamic programming and reinforcement learning.

This is a textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an introduction to the methodology of Neuro-Dynamic Programming, which is the focus of much recent research.
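To give a flavor of the methodology the books treat, a finite-horizon problem can be solved by the classical backward recursion of dynamic programming. The sketch below is illustrative only and is not taken from the text; the horizon, state space, controls, stage cost g, dynamics f, and terminal cost are made-up placeholders.

```python
# Illustrative sketch (not from the book): finite-horizon dynamic programming
# via the backward recursion J_k(x) = min_u [ g(x, u) + J_{k+1}(f(x, u)) ].
# Horizon, states, controls, costs, and dynamics are hypothetical placeholders.

N = 3                       # horizon length
states = [0, 1, 2]          # toy finite state space
controls = [-1, 0, 1]       # toy control set

def f(x, u):
    """Deterministic dynamics: next state, clipped to the state space."""
    return max(0, min(2, x + u))

def g(x, u):
    """Stage cost: penalize distance from state 0 and control effort."""
    return x * x + abs(u)

def terminal_cost(x):
    return 10 * x * x

# Backward recursion: start from the terminal cost and work back to stage 0.
J = {x: terminal_cost(x) for x in states}
policy = []                 # policy[k][x] = optimal control at stage k, state x
for k in reversed(range(N)):
    J_new, mu = {}, {}
    for x in states:
        best_u, best_cost = min(
            ((u, g(x, u) + J[f(x, u)]) for u in controls),
            key=lambda pair: pair[1],
        )
        J_new[x], mu[x] = best_cost, best_u
    J, policy = J_new, [mu] + policy

print("Optimal cost-to-go at stage 0:", J)
print("Optimal policy per stage:", policy)
```

Running the sketch prints the optimal cost-to-go at stage 0 and a stage-by-stage policy for this toy problem; the same recursion underlies the infinite-horizon and approximate methods treated in Vol. II.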