Street View Imagery Analysis in a Computational Notebook
DOI: https://doi.org/10.18335/region.v13i1.575

Abstract
Street view imagery, capturing detailed streetscapes at human eye level, has received significant attention in the past decade. Street view images can be leveraged to observe the built environment at both the element and scene levels. This chapter provides an introduction to methodologies for street view image-based analytics, including downloading street view images using the Google Places API and employing advanced computer vision techniques such as deep convolutional neural networks to detect and quantify urban elements and scenes. In particular, this chapter introduces the use of image semantic segmentation to identify distinct urban elements, and image classification techniques to categorize and predict urban scene types. Further, an example calculation of the Green View Index (GVI) demonstrates how street view imagery analysis can contribute to urban data analytics. Through these methods, street view imagery not only helps model the digital environment and enhances our understanding of urban environments, but also offers a variety of insights for geography and urban studies.
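The Green View Index mentioned above is commonly computed as the fraction of pixels a semantic segmentation model labels as vegetation in a street view image. The following is a minimal sketch of that ratio, not the chapter's actual implementation; the label id `VEGETATION_LABEL` and the toy array are hypothetical stand-ins for a real model's output (for instance, the Cityscapes label set assigns "vegetation" the train id 8, but the correct id depends on the model used).

```python
import numpy as np

# Hypothetical class id for "vegetation" in the segmentation output;
# adjust to match the label set of the model actually used.
VEGETATION_LABEL = 8

def green_view_index(label_map: np.ndarray,
                     vegetation_label: int = VEGETATION_LABEL) -> float:
    """GVI for one image: share of pixels classified as vegetation."""
    return float(np.mean(label_map == vegetation_label))

# Toy 4x4 "segmentation map" in which 6 of 16 pixels are vegetation.
demo = np.array([
    [8, 8, 0, 0],
    [8, 8, 1, 1],
    [8, 8, 2, 2],
    [0, 0, 0, 0],
])
print(green_view_index(demo))  # 0.375
```

In practice, the per-image values at one sampling point (e.g., several camera headings at the same location) are usually averaged to give a single GVI for that location.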
License
Copyright (c) 2026 Yuhao Kang

This work is licensed under a Creative Commons Attribution 4.0 International License.



