Date of Award:

12-2021

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Mathematics and Statistics

Committee Chair(s):

Adele Cutler

Committee:

Adele Cutler, Janis L. Boettinger, Sandra Gillam, Kevin Moon, Daniel Coster

Abstract

Satellite images capture the ground from overhead, and prior work has shown they can be an effective basis for generating corresponding street-view images. Unfortunately, the high viewing angle and possible cloud cover can make it difficult to generate street-level views from satellite imagery alone. Where satellite images fall short, other types of data associated with a point on the ground can compensate. In this work we propose a novel extension of an existing machine learning model that generates images purely from text, allowing both satellite images and text descriptions to be given as input for generating street-level images. We also propose a new way to evaluate such a model that readily accommodates multiple input data types. Models are trained and evaluated on satellite imagery from the WorldView-3 satellite, Wikipedia text descriptions of cities, and OpenStreetCam dashboard-camera-style imagery.

Checksum

19396939eaa62cc3043e35e86085751a

Available for download on Tuesday, December 01, 2026

Included in

Mathematics Commons