Date of Award:
12-2021
Document Type:
Dissertation
Degree Name:
Doctor of Philosophy (PhD)
Department:
Mathematics and Statistics
Committee Chair(s):
Adele Cutler
Committee:
Adele Cutler, D. Richard Cutler, Sandra Gillam, Kevin Moon, Daniel Coster
Abstract
Satellite images, which capture the ground from overhead, have been shown to be an effective source for generating street-view images. Unfortunately, the high viewing angle and possible cloud cover can make it difficult to generate street-level views from satellite imagery alone. Where satellite images fall short, other types of data associated with a point on the ground may compensate. In this work we propose a novel extension of an existing machine learning model that generates images purely from text, allowing both satellite images and text descriptions to be given as input for generating street-level images. We also propose a new way to evaluate the model that extends naturally to multiple input data types. Models are trained and evaluated on satellite imagery from the WorldView-3 satellite, Wikipedia text descriptions of cities, and OpenStreetCam dashboard-camera-style imagery.
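The abstract describes conditioning a text-to-image GAN on a satellite image in addition to a text description. As a minimal sketch of that fusion idea only, not the dissertation's actual architecture, the module below encodes the satellite image with a small CNN, concatenates the result with a text embedding, and projects to a single conditioning vector, as in a standard conditional GAN. All names and dimensions here (MultiSourceConditioner, text_dim, cond_dim, the encoder layout) are hypothetical; AttnGAN itself conditions on sentence and word embeddings via attention, and the multi-source details are given in the dissertation.

```python
import torch
import torch.nn as nn

class MultiSourceConditioner(nn.Module):
    """Hypothetical sketch: fuse a text embedding and a satellite-image
    embedding into one conditioning vector for a GAN generator."""

    def __init__(self, text_dim=256, img_channels=3, cond_dim=128):
        super().__init__()
        # Small CNN encoder for the satellite image (a stand-in; the
        # dissertation's encoder is not specified in the abstract).
        self.img_encoder = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Project the concatenated text + image features to cond_dim.
        self.fuse = nn.Linear(text_dim + 64, cond_dim)

    def forward(self, text_emb, sat_img):
        img_emb = self.img_encoder(sat_img)           # (B, 64)
        joint = torch.cat([text_emb, img_emb], dim=1) # (B, text_dim + 64)
        return torch.relu(self.fuse(joint))           # (B, cond_dim)

# Usage: the conditioning vector is concatenated with a noise vector
# and fed to the generator, as in a standard conditional GAN.
cond = MultiSourceConditioner()(torch.randn(4, 256), torch.randn(4, 3, 64, 64))
noise = torch.randn(4, 100)
gen_input = torch.cat([noise, cond], dim=1)           # (4, 228)
```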
Checksum:
19396939eaa62cc3043e35e86085751a
Recommended Citation
Jones, Sharad, "Multi-Source AttnGAN for Ground-Level View Scene Generation" (2021). All Graduate Theses and Dissertations, Spring 1920 to Summer 2023. 8248.
https://digitalcommons.usu.edu/etd/8248
Copyright for this work is retained by the student.