Date of Award:
Master of Science (MS)
Nicholas S. Flann
Reinforcement learning techniques offer a powerful method of finding solutions in unpredictable problem environments where human supervision is not possible. However, in many real-world situations, the state space needed to represent the solutions becomes so large that using these methods is infeasible, and the vast majority of those states contribute nothing toward finding the optimal solution. This work introduces a novel method of using linear programming to identify and represent the small region of the state space that is most likely to contain a near-optimal solution, significantly reducing both the memory requirements and the time needed to arrive at a solution. An empirical study on a specific vehicle-dispatching problem demonstrates the validity of this method: on problems too large for a traditional reinforcement learning agent, the new approach yields solutions that significantly outperform other, nonlearning methods. In addition, the new method is shown to be robust to changing conditions both during training and during execution. Finally, some areas of future work are outlined to suggest how this approach might be applied to additional problems and environments.
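The core idea described above can be illustrated with a minimal sketch. The toy model below is purely hypothetical (it is not the thesis's actual dispatching formulation): a one-dimensional line of states stands in for the dispatch state space, a closed-form stand-in replaces the real linear-programming relaxation, and tabular Q-learning is restricted to the small band of states the LP solution points to, so the Q-table never allocates entries for the rest of the space.

```python
import random

# Hypothetical toy model (NOT the thesis's dispatch problem): states 0..99,
# actions -1/+1, and reward only when the agent reaches a "depot" state.
N_STATES, DEPOT = 100, 70

def lp_relaxation_center():
    # Stand-in for solving the LP relaxation of the full problem; in this
    # toy model the continuous optimum is simply the depot location.
    return DEPOT

def restricted_states(center, width=10):
    # Keep only the small band of states suggested by the LP solution,
    # instead of representing all N_STATES states in the Q-table.
    return range(max(0, center - width), min(N_STATES, center + width + 1))

def q_learn(states, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    rng = random.Random(0)
    # Q-table exists only over the restricted band of states.
    Q = {s: {-1: 0.0, +1: 0.0} for s in states}
    lo, hi = min(states), max(states)
    for _ in range(episodes):
        s = rng.choice(list(states))
        for _ in range(50):
            # Epsilon-greedy action selection.
            a = rng.choice([-1, +1]) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2 = min(hi, max(lo, s + a))  # clip transitions to the band
            r = 1.0 if s2 == DEPOT else 0.0
            # Standard one-step Q-learning update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
            s = s2
    return Q

band = restricted_states(lp_relaxation_center())
Q = q_learn(band)
policy = {s: max(Q[s], key=Q[s].get) for s in band}
```

The Q-table here holds 21 states rather than 100; on a realistic dispatch problem the same pruning is what makes a tabular representation tractable at all, at the cost of trusting that the LP relaxation has identified the right region.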
Burton, Scott H., "Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning" (2010). All Graduate Theses and Dissertations. 559.
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us at .