Date of Award:
Master of Science (MS)
Ensuring the safe integration of autonomous vehicles into real-world environments requires a comprehensive understanding of pedestrian behavior. This study addresses the challenge of predicting pedestrian movement and crossing intentions, a crucial aspect of developing fully autonomous vehicles.
The research leverages Honda's TITAN dataset, comprising 700 unique clips captured by moving vehicles in high-foot-traffic areas of Tokyo, Japan. Each clip provides detailed contextual information, including human-labeled tags for individuals and vehicles that encompass attributes such as age, motion status, and communicative actions. Long Short-Term Memory (LSTM) networks were trained on various combinations of this contextual data alongside basic bounding box coordinates, and the best-performing models were identified by mean squared error (MSE) for each prediction. Decision trees were then trained on the MSE data to identify the contextual features that consistently contributed to high or low prediction errors.
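The evaluation stage described above can be sketched in minimal form: compute a per-pedestrian MSE between predicted and observed bounding-box sequences, then derive a high/low error label of the kind a decision tree could be trained against. This is an illustrative sketch only; the function and variable names, the (x, y, w, h) box format, and the median threshold are assumptions, not details taken from the thesis or the TITAN dataset.

```python
# Hypothetical sketch of the error-analysis stage: per-track bounding-box MSE,
# then a high/low label relative to the median error. All names and the box
# format (x, y, w, h) are illustrative assumptions, not the thesis's own code.

def bbox_mse(predicted, observed):
    """Mean squared error over paired sequences of (x, y, w, h) boxes."""
    total = 0.0
    count = 0
    for p, o in zip(predicted, observed):
        for pv, ov in zip(p, o):
            total += (pv - ov) ** 2
            count += 1
    return total / count

def label_errors(mse_by_track):
    """Label each track 'high' or 'low' relative to the median MSE,
    producing the target variable for a decision-tree classifier."""
    values = sorted(mse_by_track.values())
    median = values[len(values) // 2]
    return {tid: ("high" if m > median else "low")
            for tid, m in mse_by_track.items()}

# Toy usage: three tracks with increasing prediction error.
errors = {
    "ped_1": bbox_mse([(10, 10, 4, 8)], [(10, 10, 4, 8)]),  # perfect: 0.0
    "ped_2": bbox_mse([(10, 10, 4, 8)], [(11, 10, 4, 8)]),  # off by 1 in x
    "ped_3": bbox_mse([(10, 10, 4, 8)], [(14, 13, 4, 8)]),  # larger drift
}
labels = label_errors(errors)
```

In the study itself, labels like these would be paired with each pedestrian's contextual tags (age, motion status, communicative actions) as the decision tree's input features.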
This project sheds light on the significance of contextual behavioral data in predicting pedestrian motion and intention. By analyzing how age, motion status, communicative actions, and other factors affect prediction accuracy, the study offers insight into the key elements autonomous vehicles should consider when anticipating pedestrian movements in real-world settings. Ultimately, this research contributes to the development of robust and safe autonomous vehicle systems by identifying crucial contextual cues for accurate pedestrian pathing predictions.
Bingham, Laurel, "Pedestrian Pathing Prediction Using Complex Contextual Behavioral Data in High Foot Traffic Settings" (2024). All Graduate Theses and Dissertations, Fall 2023 to Present. 106.
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us at .