Date of Award:
Doctor of Philosophy (PhD)
Mathematics and Statistics
Janis L. Boettinger
Satellite images have been shown to be an effective data source for generating ground-level, street-view-style images. Unfortunately, the high viewing angle and possible cloud cover can make it difficult to generate street-level views from satellite imagery alone. Where satellite images fall short, other types of data associated with the same point on the ground may compensate. In this work we propose a novel extension of an existing machine learning model that generates images purely from text, allowing both satellite images and text descriptions to be given as input for generating street-level images. We also propose a new evaluation method for the model that readily extends to multiple input data types. Models are trained and evaluated on satellite imagery from the WorldView-3 satellite, Wikipedia text descriptions of cities, and OpenStreetCam dashboard-camera-style imagery.
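The core idea of the multi-source extension, conditioning the generator on both a text embedding and a satellite-image embedding rather than text alone, can be sketched in a few lines. This is a minimal illustration, not the thesis's actual architecture: the encoder functions, dimensions, and the simple concatenation-based fusion below are all assumptions standing in for the trained AttnGAN components.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(tokens, dim=256):
    # Stand-in for a pretrained text encoder (AttnGAN uses an RNN encoder);
    # here tokens are simply hashed into a fixed-size bag-of-words vector.
    vec = np.zeros(dim)
    for t in tokens:
        vec[hash(t) % dim] += 1.0
    return vec / max(len(tokens), 1)

def encode_satellite(image, dim=256):
    # Stand-in for a CNN image encoder: global-average-pool the bands,
    # then apply a random linear projection to the embedding size.
    pooled = image.mean(axis=(0, 1))                 # shape: (channels,)
    proj = rng.standard_normal((pooled.size, dim))
    return pooled @ proj

def fused_condition(tokens, image, noise_dim=100):
    # Multi-source conditioning: concatenate the text embedding, the
    # satellite embedding, and a noise vector into one generator input.
    z = rng.standard_normal(noise_dim)
    return np.concatenate([encode_text(tokens), encode_satellite(image), z])

# Hypothetical 8-band, 64x64 patch mimicking WorldView-3 multispectral data.
cond = fused_condition(["city", "street", "buildings"],
                       rng.random((64, 64, 8)))
print(cond.shape)  # (612,) = 256 text + 256 image + 100 noise
```

The fused vector would then feed the first upsampling stage of the generator, so that both modalities influence the synthesized street-level image from the start.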
Jones, Sharad, "Multi-Source AttnGAN for Ground-Level View Scene Generation" (2021). All Graduate Theses and Dissertations. 8248.
Available for download on Tuesday, December 01, 2026
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us at .