Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series


dc.identifier.uri http://dx.doi.org/10.15488/16476
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/16603
dc.contributor.author Teimouri, Maryam
dc.contributor.author Mokhtarzade, Mehdi
dc.contributor.author Baghdadi, Nicolas
dc.contributor.author Heipke, Christian
dc.date.accessioned 2024-03-04T08:07:43Z
dc.date.available 2024-03-04T08:07:43Z
dc.date.issued 2023
dc.identifier.citation Teimouri, M.; Mokhtarzade, M.; Baghdadi, N.; Heipke, C.: Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series. In: Journal of Photogrammetry, Remote Sensing and Geoinformation Science (PFG) 91 (2023), S. 413-423. DOI: https://doi.org/10.1007/s41064-023-00256-w
dc.description.abstract Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The evaluation shows that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL. eng
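The abstract's VTL generation step — train a small SOM per crop class to obtain sub-class centres, then label unlabeled pixels by their distance to those centres — can be sketched as below. This is a minimal illustration, not the authors' implementation: the SOM grid size, learning-rate schedule, and the simple nearest-prototype assignment are assumptions made for the sketch.

```python
import numpy as np

def train_som(samples, grid=(2, 2), iters=500, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny self-organizing map on one crop's training samples.

    Returns the SOM prototype vectors, used here as sub-class centres.
    """
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    # unit coordinates on the 2-D SOM grid (for the neighbourhood function)
    coords = np.array([(i, j) for i in range(grid[0])
                              for j in range(grid[1])], dtype=float)
    # initialise prototypes from randomly chosen training samples
    protos = samples[rng.integers(0, len(samples), n_units)].astype(float)
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        lr = lr0 * (1 - t / iters)              # linearly decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3  # shrinking neighbourhood width
        bmu = np.argmin(((protos - x) ** 2).sum(axis=1))  # best-matching unit
        # Gaussian neighbourhood around the BMU on the grid
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                   / (2 * sigma ** 2))
        protos += lr * h[:, None] * (x - protos)
    return protos

def generate_vtl(train_X, train_y, unlabeled_X, grid=(2, 2)):
    """Assign virtual training labels to unlabeled pixels.

    Each class is sub-divided by its own SOM; an unlabeled pixel inherits
    the class of the nearest sub-class centre.
    """
    centres, labels = [], []
    for c in np.unique(train_y):
        protos = train_som(train_X[train_y == c], grid=grid)
        centres.append(protos)
        labels.extend([c] * len(protos))
    centres = np.vstack(centres)
    labels = np.array(labels)
    # squared Euclidean distance of every pixel to every sub-class centre
    d = ((unlabeled_X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return labels[np.argmin(d, axis=1)]
```

In the paper the per-pixel feature vectors would come from the fused Sentinel-1/Sentinel-2 time series; here any fixed-length feature vector works.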
dc.language.iso eng
dc.publisher [Cham] : Springer International Publishing
dc.relation.ispartofseries Journal of Photogrammetry, Remote Sensing and Geoinformation Science (PFG) 91 (2023)
dc.rights CC BY 4.0
dc.rights.uri https://creativecommons.org/licenses/by/4.0
dc.subject 3D-CNN eng
dc.subject Crop classification eng
dc.subject Fusion eng
dc.subject Optical and radar image time series eng
dc.subject Virtual training labels eng
dc.subject.ddc 550 | Geowissenschaften
dc.title Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series eng
dc.type Article
dc.type Text
dc.relation.essn 2512-2819
dc.relation.issn 2512-2789
dc.relation.doi https://doi.org/10.1007/s41064-023-00256-w
dc.bibliographicCitation.volume 91
dc.bibliographicCitation.firstPage 413
dc.bibliographicCitation.lastPage 423
dc.description.version publishedVersion
tib.accessRights frei zugänglich

