Date of Award:
12-2021
Document Type:
Dissertation
Degree Name:
Doctor of Philosophy (PhD)
Department:
Computer Science
Committee Chair(s):
Vicki H. Allan
Committee:
Vicki H. Allan
Curtis Dyreson
David Paper
Mario Harper
Chad Mano
Abstract
This research focuses on intelligent traffic management, including stochastic path planning and city-scale traffic optimization. Stochastic path planning addresses finding paths when edge weights are not fixed but change depending on the time of day and week. We then focus on minimizing the running time of the overall procedure at query time by using precomputation and approximation: the city graph is partitioned into smaller groups of nodes, each represented by an exemplar. At query time, the source and destination are connected to their respective exemplars, and the path between those exemplars is retrieved.

Next, we move toward minimizing city-wide traffic congestion through structural changes such as changing the number of lanes, using ramp metering, varying speed limits, and modifying signal timing. We propose a multi-agent reinforcement learning (RL) framework for improving traffic flow in city networks. Our framework uses two-level learning: (a) each agent learns an initial policy on its own, and (b) multiple agents, changing the environment at the same time, update their policies based on interaction with the dynamic environment and in agreement with the other agents. The goal of the RL agents is to interact with the environment and learn the optimal modification for each road segment by maximizing the cumulative reward over the set of possible actions in the state space.
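The exemplar-based speedup described in the first part can be illustrated with a small sketch. This is only a toy illustration under simplifying assumptions, not the dissertation's implementation: it assumes a static graph with nonnegative edge weights (the thesis considers time-dependent stochastic weights), and the function and variable names (precompute_exemplar_table, approx_distance, partition_of) are hypothetical.

import heapq

def dijkstra(graph, src):
    # Shortest distances from src in a dict-of-adjacency-lists graph.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def precompute_exemplar_table(graph, exemplars):
    # Offline step: shortest distances from every exemplar to all nodes.
    return {e: dijkstra(graph, e) for e in exemplars}

def approx_distance(graph, partition_of, exemplars, table, src, dst):
    # Query step: route src -> its exemplar -> dst's exemplar -> dst.
    e_src = exemplars[partition_of[src]]
    e_dst = exemplars[partition_of[dst]]
    inf = float("inf")
    src_to_e = dijkstra(graph, src).get(e_src, inf)   # local search at query time
    e_to_e = table[e_src].get(e_dst, inf)             # precomputed exemplar-to-exemplar
    e_to_dst = table[e_dst].get(dst, inf)             # precomputed exemplar-to-node
    return src_to_e + e_to_e + e_to_dst

if __name__ == "__main__":
    graph = {
        "a": [("b", 1.0)], "b": [("a", 1.0), ("c", 2.0)],
        "c": [("b", 2.0), ("d", 1.0)], "d": [("c", 1.0)],
    }
    partition_of = {"a": 0, "b": 0, "c": 1, "d": 1}
    exemplars = {0: "b", 1: "c"}
    table = precompute_exemplar_table(graph, exemplars.values())
    print(approx_distance(graph, partition_of, exemplars, table, "a", "d"))  # 4.0

The offline table trades memory for query speed: the query cost reduces to one small local search around the source plus two table lookups between exemplars.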
Checksum
78873163cd4e548c9c36cd2e5e5ff87c
Recommended Citation
Ahmadi, Kamilia, "Intelligent Traffic Management: From Practical Stochastic Path Planning to Reinforcement Learning Based City-Wide Traffic Optimization" (2021). All Graduate Theses and Dissertations, Spring 1920 to Summer 2023. 8327.
https://digitalcommons.usu.edu/etd/8327
Copyright for this work is retained by the student.