dc.creator | Almudevar, Anthony |
dc.date.accessioned | 2013-08-20T16:00:26Z |
dc.date.available | 2013-08-20T16:00:26Z |
dc.date.issued | 2001 |
dc.identifier.issn | 0363-0129 |
dc.identifier.issn | 1095-7138 |
dc.identifier.uri | http://library2.smu.ca/xmlui/handle/01/25065 |
dc.description | Publisher's version/PDF |
dc.description.abstract | A piecewise deterministic Markov process (PDP) is a continuous time Markov process consisting of continuous, deterministic trajectories interrupted by random jumps. The trajectories may be controlled with the object of minimizing the expected costs associated with the process. A method of representing this controlled PDP as a discrete time decision process is presented, allowing the value function for the problem to be expressed as the fixed point of a dynamic programming operator. Decisions take the form of trajectory segments. The expected costs may then be minimized through a dynamic programming algorithm, rather than through the solution of the Bellman–Hamilton–Jacobi equation, assuming the trajectory segments are numerically tractable. The technique is applied to the optimal capacity expansion problem, that is, the problem of planning the construction of new production facilities to meet rising demand. | en_CA
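The abstract expresses the value function as the fixed point of a dynamic programming operator, found by iteration rather than by solving the Bellman–Hamilton–Jacobi equation. The sketch below is a generic value-iteration loop in that spirit, applied to an invented toy capacity-expansion model; the state space, costs, transition probabilities, and discount factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical discretized problem (NOT the paper's model): states are
# capacity-shortfall levels, actions stand in for "trajectory segments"
# (here simply: build 0 or 1 unit of new capacity).
n_states = 5          # shortfall levels 0..4
actions = [0, 1]      # 0 = do nothing, 1 = build one unit of capacity
beta = 0.9            # discount factor (assumed)

def cost(s, a):
    # shortage cost plus construction cost for the chosen action
    return 2.0 * s + 3.0 * a

def next_state_dist(s, a):
    # demand rises with probability 0.5; building lowers the shortfall
    dist = np.zeros(n_states)
    base = max(s - a, 0)
    up = min(base + 1, n_states - 1)
    dist[base] += 0.5
    dist[up] += 0.5
    return dist

def bellman_operator(V):
    # T V(s) = min_a [ c(s, a) + beta * E[ V(s') ] ]
    return np.array([
        min(cost(s, a) + beta * next_state_dist(s, a) @ V for a in actions)
        for s in range(n_states)
    ])

# Iterate V <- T V until (numerically) reaching the fixed point V = T V.
V = np.zeros(n_states)
for _ in range(500):
    V_new = bellman_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```

Because the operator is a contraction for a discount factor below one, the iterates converge to the unique fixed point, which mirrors the fixed-point characterization the abstract relies on.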
dc.description.provenance | Submitted by Trish Grelot (trish.grelot@smu.ca) on 2013-08-20T16:00:26Z; No. of bitstreams: 1; almudevar_anthony_article_2001.pdf: 158241 bytes, checksum: e9d21cb34dafc007a9ff630669f7d26c (MD5) | en
dc.description.provenance | Made available in DSpace on 2013-08-20T16:00:26Z (GMT); No. of bitstreams: 1; almudevar_anthony_article_2001.pdf: 158241 bytes, checksum: e9d21cb34dafc007a9ff630669f7d26c (MD5); Previous issue date: 2001 | en
dc.language.iso | en | en_CA
dc.publisher | Society for Industrial and Applied Mathematics | en_CA
dc.title | A dynamic programming algorithm for the optimal control of piecewise deterministic Markov processes | en_CA
dc.type | Text | en_CA
dcterms.bibliographicCitation | SIAM Journal on Control and Optimization 40(2), 525-539. (2001) |