'Remotely Sensed': 2D transposed onto 3D
The goal of this visualization was to combine remotely sensed data into a physical cartographic product.
In order to achieve this, we selected an area based on satellite imagery and digital elevation model (DEM) data availability. We chose a section of land stretching from Puget Sound across Pierce County to Mount Rainier and the beginnings of the Cascade Mountain Range.
90m DEM data was acquired from:
- Jarvis A., H.I. Reuter, A. Nelson, E. Guevara, 2008, Hole-filled seamless SRTM data V4, International Centre for Tropical Agriculture (CIAT), available from
http://srtm.csi.cgiar.org
Printing the Models
- The DEM was clipped to the study area and to the 3D printer's size limitations, then converted to STL format with 1.5x vertical exaggeration and 0.25 mm spacing
- The STL was imported into Blender, an open-source 3D modeling program
- In Blender, scaling issues were resolved and the file was prepared for printing
- Two printers (a MakerBot and a FlashForge Creator Pro) were used to print two sets of four models. The MakerBot used an STL with 0.1 mm spacing. The FlashForge had additional settings configured, including a 0.9 mm roof thickness to improve the quality of the final print
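The DEM-to-STL conversion above can be sketched in a few lines: each grid cell of the elevation raster becomes two triangles of a surface mesh. This is a minimal illustration, not the tool actually used; `dem_to_stl` is a hypothetical helper, and the ASCII STL it writes omits the base and side walls a watertight printable model would need.

```python
import numpy as np

def dem_to_stl(dem, path, vertical_exaggeration=1.5, cell_size=0.25):
    """Write a 2D elevation array as an ASCII STL surface.

    Hypothetical sketch: scales elevations by the vertical
    exaggeration factor and emits two triangles per grid cell.
    """
    z = dem.astype(float) * vertical_exaggeration
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write("solid dem\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                # Corner vertices of one grid cell
                v00 = (j * cell_size, i * cell_size, z[i, j])
                v01 = ((j + 1) * cell_size, i * cell_size, z[i, j + 1])
                v10 = (j * cell_size, (i + 1) * cell_size, z[i + 1, j])
                v11 = ((j + 1) * cell_size, (i + 1) * cell_size, z[i + 1, j + 1])
                # Split the cell into two triangles
                for tri in ((v00, v01, v11), (v00, v11, v10)):
                    f.write("  facet normal 0 0 1\n    outer loop\n")
                    for x, y, h in tri:
                        f.write(f"      vertex {x:.3f} {y:.3f} {h:.3f}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid dem\n")

# Toy 3x3 elevation grid -> 4 cells -> 8 triangles
dem_to_stl(np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]]), "dem.stl")
```

In practice the resulting STL would then be imported into Blender, as described above, for scaling and print preparation.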
Classifying the terrain
- Following tutorial guidelines, we created a deep neural network model that accepts an aerial image as input and returns a land cover label for every pixel in the image.
- A model trained for 250 epochs was applied to an approximately 1 km x 1 km region centered on a point in Pierce County, WA.
- Our evaluation data consist of a pair of files not used during training: a NAIP aerial image from USGS and ground-truth land cover labels from USDA.
- When loading this pair of files, we apply the same transposition, normalization, and label-grouping strategies used during training.
- The trained model takes a 256 x 256 pixel input and produces a 128 x 128 pixel output corresponding to the center of the input region. To get output labels for the entire region of interest, the evaluation script must therefore use a tiling strategy.
- Our trained model outputs predictions for five classes, which we will visualize using the following color code:
- No data: black
- Water: blue
- Trees: dark green
- Herbaceous: light green
- Barren/impervious: gray
- Below are two images: an aerial photograph of the region of interest, and the corresponding predicted land cover labels.
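The tiling and color-coding steps above can be sketched as follows. This is a minimal illustration under stated assumptions: `predict` is a stand-in for the trained network (here it just returns zeros), the image dimensions are assumed to be multiples of 128, and the RGB values chosen for each class are our own, not from the tutorial.

```python
import numpy as np

IN, OUT = 256, 128          # model input size, model output size
PAD = (IN - OUT) // 2       # 64-pixel context border on each side

def predict(tile):
    # Placeholder for the trained CNN: maps a 256x256 input tile
    # to 128x128 class labels for the tile's center.
    return np.zeros((OUT, OUT), dtype=np.uint8)

def tile_predict(image):
    """Label a full image by sliding the model in OUT-sized steps.

    Reflect-padding supplies context at the edges so every output
    pixel of the region of interest gets a prediction.
    """
    h, w = image.shape[:2]
    assert h % OUT == 0 and w % OUT == 0  # assumed for simplicity
    padded = np.pad(image, ((PAD, PAD), (PAD, PAD), (0, 0)), mode="reflect")
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, OUT):
        for x in range(0, w, OUT):
            labels[y:y + OUT, x:x + OUT] = predict(padded[y:y + IN, x:x + IN])
    return labels

# Assumed RGB values for the five-class color code
COLORS = {
    0: (0, 0, 0),         # No data: black
    1: (0, 0, 255),       # Water: blue
    2: (0, 100, 0),       # Trees: dark green
    3: (144, 238, 144),   # Herbaceous: light green
    4: (128, 128, 128),   # Barren/impervious: gray
}

def colorize(labels):
    """Map a 2D label array to an RGB image for visualization."""
    rgb = np.zeros(labels.shape + (3,), dtype=np.uint8)
    for cls, color in COLORS.items():
        rgb[labels == cls] = color
    return rgb

# Example run on a random 4-band "aerial image"
img = np.random.rand(256, 256, 4).astype(np.float32)
labels = tile_predict(img)
rgb = colorize(labels)
```

With the placeholder `predict`, every pixel is labeled class 0 and the visualization is all black; swapping in the real trained model yields the predicted land cover map shown in the second image.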