Street View Imagery Analysis in a Computational Notebook

Authors

Y. Kang
DOI:

https://doi.org/10.18335/region.v13i1.575

Abstract

Street view imagery, which captures detailed streetscapes at human eye level, has received significant attention in the past decade. Street view images can be leveraged to observe the built environment at both the element and scene level. This chapter provides an introduction to methodologies for street view image-based analytics, including downloading street view images using the Google Places API and employing advanced computer vision techniques such as Deep Convolutional Neural Networks to detect and quantify urban elements and scenes. In particular, this chapter introduces the use of image semantic segmentation to identify distinct urban elements, and image classification techniques to categorize and predict urban scene types. Further, an example calculation of the Green View Index (GVI) demonstrates how street view imagery analysis can contribute to urban data analytics. Through these methods, street view imagery not only helps model the digital environment and enhance our understanding of urban environments, but also offers a variety of insights for geography and urban studies.
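The Green View Index mentioned above is commonly defined as the share of vegetation pixels in a street view image, typically obtained from a semantic segmentation mask. The following is a minimal sketch of that calculation, assuming a 2-D label array from some segmentation model and a hypothetical `vegetation_label` id for the vegetation class (the actual id depends on the model's label set, e.g. Cityscapes or ADE20K):

```python
import numpy as np

def green_view_index(seg_mask: np.ndarray, vegetation_label: int = 4) -> float:
    """Compute the Green View Index (GVI) for one street view image.

    seg_mask: 2-D array of per-pixel class labels produced by a
    semantic segmentation model.
    vegetation_label: assumed id of the vegetation class in that
    model's label set (model-dependent).
    GVI = vegetation pixels / total pixels.
    """
    green = int(np.count_nonzero(seg_mask == vegetation_label))
    return green / seg_mask.size

# toy 4x4 segmentation mask: label 4 = vegetation, 0 = other
mask = np.array([
    [4, 4, 0, 0],
    [4, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 4],
])
print(green_view_index(mask))  # 4 of 16 pixels are vegetation -> 0.25
```

In practice, a site-level GVI is often computed by averaging this per-image ratio over several images taken at the same location with different camera headings.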


Published

2026-03-06

How to Cite

Kang, Y. (2026) “Street View Imagery Analysis in a Computational Notebook”, REGION. Vienna, Austria, 13(1), pp. 43–60. doi: 10.18335/region.v13i1.575.

Section

Articles