Date of Award:
5-2010
Document Type:
Thesis
Degree Name:
Master of Science (MS)
Department:
Computer Science
Committee Chair(s):
Nicholas S. Flann
Committee:
Nicholas S. Flann
Donald H. Cooley
Dan W. Watson
Abstract
Reinforcement learning techniques offer a very powerful method of finding solutions in unpredictable problem environments where human supervision is not possible. However, in many real-world situations, the state space needed to represent the solutions becomes so large that using these methods becomes infeasible. Often the vast majority of these states are not valuable in finding the optimal solution. This work introduces a novel method of using linear programming to identify and represent the small area of the state space that is most likely to lead to a near-optimal solution, significantly reducing the memory requirements and time needed to arrive at a solution. An empirical study is provided to show the validity of this method with respect to a specific problem in vehicle dispatching. This study demonstrates that, in problems too large for a traditional reinforcement learning agent, this new approach yields solutions that are a significant improvement over other non-learning methods. In addition, this new method is shown to be robust to changing conditions both during training and execution. Finally, some areas of future work are outlined to suggest how this new approach might be applied to additional problems and environments.
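The core idea of the abstract — use a cheap relaxation of the problem to mark a small "promising" region of the state space, then let a reinforcement learning agent allocate memory only for states in that region — can be sketched in miniature. The sketch below is purely illustrative and is not the thesis's actual algorithm: it uses a toy grid-world dispatch-like task, and a Manhattan-distance corridor around the relaxation's straight-line route stands in for the linear-programming bound; the names `promising`, `slack`, and the grid itself are assumptions introduced here for illustration.

```python
import random

# Hypothetical sketch of state-space pruning for RL, NOT the thesis's method:
# a relaxation marks a "promising" corridor of states, and the Q-table is a
# sparse dict that only ever stores entries for states inside that corridor.

SIZE = 20                       # 20x20 grid world, start (0, 0), goal (19, 19)
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def promising(s, slack=4):
    # The relaxation's optimal route runs along the diagonal; keep only
    # states within `slack` of it (a stand-in for an LP-derived bound).
    return abs(s[0] - s[1]) <= slack

Q = {}  # sparse Q-table: only promising states ever get entries

def q(s, a):
    return Q.get((s, a), 0.0)

def step(s, a):
    # Clip moves to the grid; treat pruned (non-promising) states as walls.
    ns = (min(max(s[0] + a[0], 0), SIZE - 1),
          min(max(s[1] + a[1], 0), SIZE - 1))
    if not promising(ns):
        ns = s
    reward = 10.0 if ns == GOAL else -1.0
    return ns, reward

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * SIZE):
            # Epsilon-greedy action selection over the sparse table.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q(s, act))
            ns, r = step(s, a)
            # Standard Q-learning update, stored only for visited states.
            Q[(s, a)] = q(s, a) + alpha * (
                r + gamma * max(q(ns, b) for b in ACTIONS) - q(s, a))
            s = ns
            if s == GOAL:
                break

train()
visited = {s for (s, _) in Q}
print(len(visited), "states stored vs", SIZE * SIZE, "in the full space")
```

Because transitions into pruned states are blocked, the agent's table grows only within the corridor, so memory scales with the corridor's size rather than with the full state space — the same trade-off the abstract describes, in toy form.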
Checksum
a9fd447cb77f3517addf1c9bcb85b854
Recommended Citation
Burton, Scott H., "Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning" (2010). All Graduate Theses and Dissertations, Spring 1920 to Summer 2023. 559.
https://digitalcommons.usu.edu/etd/559
Copyright for this work is retained by the student.