The Markov Decision Process (MDP) adds actions to the Markov chain. The model consists of states, actions, events, and decisions. Optionally, state blocks and decision blocks may also be included.
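As a concrete illustration, the sketch below shows in Python how these components might fit together. The two-state machine-maintenance example, and every name in it, is hypothetical, chosen for illustration rather than drawn from these pages; value iteration, one standard solution method, computes the optimal decisions.

    # A minimal MDP sketch (hypothetical example, not from these pages):
    # states, actions, events (transition probabilities), and decisions.

    states = ["good", "broken"]
    actions = {"good": ["run", "maintain"], "broken": ["repair"]}

    # transition[s][a] -> list of (next_state, probability): the events
    transition = {
        "good":   {"run":      [("good", 0.8), ("broken", 0.2)],
                   "maintain": [("good", 1.0)]},
        "broken": {"repair":   [("good", 1.0)]},
    }

    # reward[s][a]: immediate return for taking action a in state s
    reward = {
        "good":   {"run": 10.0, "maintain": 6.0},
        "broken": {"repair": -5.0},
    }

    def value_iteration(gamma=0.9, tol=1e-8):
        """Return the optimal values and policy (the decisions)."""
        v = {s: 0.0 for s in states}
        while True:
            new_v = {
                s: max(
                    reward[s][a]
                    + gamma * sum(p * v[t] for t, p in transition[s][a])
                    for a in actions[s]
                )
                for s in states
            }
            if max(abs(new_v[s] - v[s]) for s in states) < tol:
                break
            v = new_v
        policy = {
            s: max(
                actions[s],
                key=lambda a: reward[s][a]
                + gamma * sum(p * v[t] for t, p in transition[s][a]),
            )
            for s in states
        }
        return v, policy

    values, policy = value_iteration()
    print(values)  # optimal discounted value of each state
    print(policy)  # best action (decision) in each state

The discount factor gamma and the numbers above are placeholders; any MDP with the same four components can be solved the same way.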
The first three pages of this DP Models section describe an MDP model, so we will not repeat the development here. Further examples can be found by following the links in the table below, which lead to pages in the DP Examples and DP Data sections of the dynamic programming collection.