Parallel Algorithms for Processing Hydrologic Properties from Digital Terrain
Document Type
Conference Paper
Journal/Book Title/Conference
GIScience 2010, Sixth International Conference on Geographic Information Science
Location
Zurich, Switzerland
Publication Date
9-14-2010
Abstract
A Digital Elevation Model (DEM) represents land surface topography as a rectangular raster grid, where each raster cell contains a floating-point value giving the elevation of that geographic point above some base value, usually sea level (Wilson and Gallant, 2000). DEMs have become a vital component of the hydrologic modeling process and are used for a number of purposes, including distributed hydrologic modeling (Kampf and Burges, 2007) and floodplain mapping (NRC, 2007).

A revolution in the ability to collect elevation data has produced a drastic improvement in the quality and quantity of DEM data. Ground resolution has improved from 30-100 meters per raster cell 5-10 years ago to 1-5 meters today for much of the Earth's land surface. This finer resolution has greatly increased the size of the DEMs used for hydrologic purposes. For instance, representing the Provo River basin in central Utah, 1.73×10⁵ hectares (673 mi²), at a 90-meter posting interval requires 4.7×10⁵ raster cells; at 30-meter resolution, 4.2×10⁶ cells; and at 10-meter resolution, 3.8×10⁷ cells. Because of this increase in raster size, many of the analysis techniques used for coarser-resolution, smaller DEMs are prohibitively time-consuming when applied to high-resolution data. Although increases in computer processor speed and in memory and disk availability have helped make such large datasets workable, there is a need to adapt hydrologic algorithms to exploit new parallel processing capability and architectural functionality. For example, Arge et al. (2002) examined ways to frame key hydrologic terrain analysis algorithms to take advantage of transparent parallel I/O systems to overcome some of the I/O bottlenecks that occur when processing large terrain datasets in single-CPU environments. They implemented single flow direction flow routing (including pit removal) and flow accumulation, and showed that efficient algorithms designed to optimize the reading and writing of blocks of data between memory and disk, based on system component properties, can significantly improve processing times.

This paper describes parallel algorithms that have been developed to enhance hydrologic terrain processing so that larger datasets can be processed more efficiently. By physically distributing the hydrologic processing for a single dataset among compute nodes in a cluster-based system, or even among the cores of a multi-core desktop computer, considerable speedup is achieved through simultaneous processing of different portions of the domain. On a cluster-based system this approach also takes advantage of the large aggregate memory of all the compute nodes working together. Message Passing Interface (MPI) parallel implementations have been developed for pit removal, flow direction, and generalized flow accumulation methods within the Terrain Analysis Using Digital Elevation Models (TauDEM) package (Tarboton and Baker, 2008; Tarboton et al., 2009).
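To make the resolution scaling above concrete, the short C++ sketch below computes cell counts for a fixed rectangular grid extent at each cell size. The 55 km × 69 km extent is a hypothetical assumption chosen here so that the output roughly reproduces the counts quoted for the Provo River basin grid; it is not taken from the paper. The point is simply that, for a fixed extent, the cell count grows with the inverse square of the cell size.

#include <cstdio>

int main() {
    // Assumed bounding rectangle for the gridded DEM extent (hypothetical
    // values chosen to roughly match the cell counts quoted in the abstract).
    const double widthM  = 55000.0;  // east-west extent, meters
    const double heightM = 69000.0;  // north-south extent, meters
    const double resolutions[] = {90.0, 30.0, 10.0};  // cell sizes, meters

    for (double res : resolutions) {
        // Cell count scales with the inverse square of the cell size.
        long long cells = (long long)(widthM / res) * (long long)(heightM / res);
        std::printf("%5.0f m resolution: %lld cells\n", res, cells);
    }
    return 0;
}

Under these assumed dimensions the program prints roughly 4.7×10⁵, 4.2×10⁶, and 3.8×10⁷ cells at 90-, 30-, and 10-meter resolutions, matching the order-of-magnitude growth described above.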
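The abstract does not spell out how the domain is partitioned across processes, so the following is a minimal sketch in C++ with MPI (the toolchain TauDEM is built on), assuming a simple row-striped decomposition with a one-row halo exchange, of how D8 flow directions might be computed in parallel. The names (ncols, nrowsLocal), the synthetic tilted surface, and the direction encoding are all illustrative assumptions, not TauDEM's actual code.

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int ncols = 8;        // illustrative grid width
    const int nrowsLocal = 4;   // rows owned by this rank (assumed equal on every rank)

    // Local band plus one halo row above and below. Halos start at a huge
    // sentinel elevation so cells beyond the global top/bottom edges never
    // look like downslope neighbors.
    std::vector<float> z((nrowsLocal + 2) * ncols, 1.0e30f);

    // Synthetic surface rising to the south-east, so steepest descent points north-west.
    for (int r = 1; r <= nrowsLocal; ++r)
        for (int c = 0; c < ncols; ++c)
            z[r * ncols + c] = (float)((rank * nrowsLocal + (r - 1)) + c);

    int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Halo exchange: my first owned row goes up while the bottom halo arrives
    // from below, and vice versa.
    MPI_Sendrecv(&z[1 * ncols], ncols, MPI_FLOAT, up, 0,
                 &z[(nrowsLocal + 1) * ncols], ncols, MPI_FLOAT, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&z[nrowsLocal * ncols], ncols, MPI_FLOAT, down, 1,
                 &z[0 * ncols], ncols, MPI_FLOAT, up, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // D8 flow direction: each cell drains to the steepest-descent neighbor of
    // the eight around it. Direction is stored as the neighbor index 0..7
    // (an illustrative encoding), or -1 for a pit or flat cell.
    const int dr[8] = {-1, -1, -1,  0, 0,  1, 1, 1};
    const int dc[8] = {-1,  0,  1, -1, 1, -1, 0, 1};
    std::vector<int> dir(nrowsLocal * ncols, -1);
    for (int r = 1; r <= nrowsLocal; ++r) {
        for (int c = 0; c < ncols; ++c) {
            float best = 0.0f;
            for (int k = 0; k < 8; ++k) {
                int rr = r + dr[k], cc = c + dc[k];
                if (cc < 0 || cc >= ncols) continue;  // off the global left/right edge
                // rr == 0 or nrowsLocal + 1 reads a halo row filled by the exchange.
                float dist = (dr[k] != 0 && dc[k] != 0) ? 1.41421356f : 1.0f;
                float drop = (z[r * ncols + c] - z[rr * ncols + cc]) / dist;
                if (drop > best) {
                    best = drop;
                    dir[(r - 1) * ncols + c] = k;
                }
            }
        }
    }

    if (rank == 0)
        std::printf("rank 0: D8 directions computed for %d cells\n", nrowsLocal * ncols);
    MPI_Finalize();
    return 0;
}

Because D8 flow direction depends only on a cell's eight immediate neighbors, a single halo exchange gives each rank everything it needs from adjacent partitions; flow accumulation, by contrast, propagates along entire flow paths, so a parallel implementation must repeatedly exchange partial results across partition boundaries.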
Recommended Citation
Wallace, R. M.; Tarboton, David G.; Watson, Daniel W.; Schreuders, Kimberly A. T.; and Tesfa, Teklu K., "Parallel Algorithms for Processing Hydrologic Properties from Digital Terrain" (2010). Civil and Environmental Engineering Faculty Publications. Paper 2554.
https://digitalcommons.usu.edu/cee_facpub/2554