Title
Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes
Date of Award:
5-2011
Document Type:
Thesis
Degree Name:
Master of Science (MS)
Department:
Computer Science
Committee Chair(s)
Daniel L. Bryce
Committee
Daniel L. Bryce
Vicki H. Allan
Daniel W. Watson
Abstract
Partially-observable Markov decision processes (POMDPs) are especially well suited to modeling real-world problems because they account for sensor and effector uncertainty. Unfortunately, this uncertainty also makes solving a POMDP computationally challenging. Traditional approaches, which are based on value iteration, can be slow because they compute optimal actions for every possible belief state. With the help of the Fast Forward (FF) planner, FF-Replan and FF-Hindsight have shown success in quickly solving fully-observable Markov decision processes (MDPs) by solving classical planning translations of the problem. This thesis extends the concept of problem determinization to POMDPs by sampling action observations (similar to how FF-Replan samples action outcomes) and guiding the construction of policy trajectories with a conformant (as opposed to classical) planning heuristic. The resulting planner is called POND-Hindsight.
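The sketch below (in Python) illustrates the hindsight-style action selection the abstract describes: sample a set of fixed observation futures, solve or heuristically evaluate each resulting determinized problem, and average the costs. The helper names sample_observation_future and solve_determinized are placeholders assumed for illustration only; they are not functions from the thesis or the POND planner, and this is a sketch of the general technique rather than the thesis's actual implementation.

def hindsight_action(belief, actions, sample_observation_future,
                     solve_determinized, num_samples=30, horizon=10):
    """Pick the action whose sampled determinized futures look cheapest.

    For each candidate action, several futures are sampled by fixing the
    observations that would be received (analogous to FF-Replan fixing
    action outcomes). Each fixed future yields a determinized planning
    problem, whose plan cost is estimated by solve_determinized (which
    could be guided by a conformant planning heuristic, as the abstract
    suggests). Costs are averaged, and the lowest-average action wins.
    """
    best_action, best_cost = None, float("inf")
    for action in actions:
        total_cost = 0.0
        for _ in range(num_samples):
            # "Hindsight" sample: commit to an observation sequence up front.
            future = sample_observation_future(belief, action, horizon)
            # Solve (or heuristically estimate) the determinized problem
            # reached from the current belief after taking `action`.
            total_cost += solve_determinized(belief, action, future)
        average_cost = total_cost / num_samples
        if average_cost < best_cost:
            best_action, best_cost = action, average_cost
    return best_action

In this sketch the belief, the action set, and the two helpers would be supplied by the surrounding POMDP planner; raising num_samples trades extra computation for a lower-variance cost estimate.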
Checksum
a27029018f3345bb9414b03746ebb520
Recommended Citation
Olsen, Alan, "Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes" (2011). All Graduate Theses and Dissertations. 1035.
https://digitalcommons.usu.edu/etd/1035
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us.
Comments
This work is made publicly available electronically on September 29, 2011.