Multi-modal Land Cover Classification of Historical Aerial Images and Topographic Maps: A Comparative Study

Bibliographic Details
Published in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. X-4-2024 (2024), p. 107
Main Author: Dorozynski, Mareike
Other Authors: Rottensteiner, Franz; Thiemann, Frank; Sester, Monika; Dahms, Thorsten; Hovenbitzer, Michael
Published: Copernicus GmbH
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: Knowledge about land cover is relevant for many applications, such as updating topographic information systems, monitoring the environment, and planning future land cover. Particularly for monitoring, it is of interest to know not only the current land cover but also the land cover at past epochs. Efficient, computer-aided spatio-temporal analysis requires land cover information in an explicit digital form. In this context, historical aerial orthophotos and scanned historical topographic maps can serve as sources in which land cover information is contained implicitly. The present work aims to extract land cover from these data automatically by means of classification. To this end, a deep learning-based multi-modal classifier is proposed that exploits information from aerial imagery and maps simultaneously for land cover prediction. Two variants of the classifier are trained with a supervised training strategy, one for building segmentation and one for vegetation segmentation. Both classifiers are evaluated on independent test sets and compared to their respective uni-modal counterparts, i.e. an aerial image classifier and a map classifier. The multi-modal classifiers achieve a mean F1-score of 62.2% for building segmentation and 83.7% for vegetation segmentation. A detailed analysis of the quantitative and qualitative results points to promising directions for future research to further improve the performance of the multi-modal classifier.
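The record describes the approach only at a high level, so the following is a minimal illustrative sketch in PyTorch of one plausible multi-modal design: a late-fusion network with one encoder per modality (orthophoto and scanned map) whose features are concatenated before a shared segmentation head. The architecture, layer sizes, and all names (conv_block, MultiModalSegmenter, fuse, etc.) are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only: assumes a simple two-encoder late-fusion design.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, then 2x2 max pooling (halves the resolution)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class MultiModalSegmenter(nn.Module):
    """Fuses features from an aerial-image encoder and a map encoder and
    upsamples to per-pixel class logits (e.g. building / background)."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.image_encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.map_encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.fuse = nn.Conv2d(128, 64, kernel_size=1)  # mix the concatenated features
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, image, map_sheet):
        f_img = self.image_encoder(image)     # features from the orthophoto
        f_map = self.map_encoder(map_sheet)   # features from the scanned map
        fused = self.fuse(torch.cat([f_img, f_map], dim=1))
        return self.decoder(fused)            # logits at the input resolution


# Example: one 3-channel orthophoto patch and one map patch of 256 x 256 pixels.
model = MultiModalSegmenter(num_classes=2)
logits = model(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])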
ISSN: 2194-9042, 2194-9050
DOI: 10.5194/isprs-annals-X-4-2024-107-2024
Source: Engineering Database