
dc.identifier.uri http://dx.doi.org/10.15488/12429
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/12528
dc.contributor.author Narisetti, Narendra
dc.contributor.author Henke, Michael
dc.contributor.author Seiler, Christiane
dc.contributor.author Junker, Astrid
dc.contributor.author Ostermann, Jörn
dc.contributor.author Altmann, Thomas
dc.contributor.author Gladilin, Evgeny
dc.date.accessioned 2022-07-07T08:09:55Z
dc.date.available 2022-07-07T08:09:55Z
dc.date.issued 2021
dc.identifier.citation Narisetti, N.; Henke, M.; Seiler, C.; Junker, A.; Ostermann, J. et al.: Fully-automated root image analysis (faRIA). In: Scientific Reports 11 (2021), Nr. 1, 16047. DOI: https://doi.org/10.1038/s41598-021-95480-y
dc.description.abstract High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbation on plant root morphology, development and function. To efficiently analyse a large number of structurally complex soil-root images, advanced methods for automated image segmentation are required. Due to the often unavoidable overlap between the intensity of fore- and background regions, simple thresholding methods are generally not suitable for the segmentation of root regions. Higher-level cognitive models such as convolutional neural networks (CNN) provide capabilities for segmenting roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model, which relies on an extension of the U-Net architecture. The developed CNN framework was designed to efficiently segment root structures of different size, shape and optical contrast using low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools (e.g., SegRoot, with a Dice coefficient of 0.67), applying not only to NIR but also to other imaging modalities and plant species, such as barley and arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in an automated manner (i.e. without manual interaction with data and/or parameter tuning), providing quantitative plant scientists with a powerful analytical tool. © 2021, The Author(s). eng
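The Dice coefficient reported in the abstract (0.87 for faRIA vs. 0.67 for SegRoot) measures the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of how this metric is computed on binary masks; the function name and toy arrays are illustrative, not taken from the faRIA code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks overlap perfectly.
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: 2 overlapping foreground pixels, 3 foreground pixels each.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A Dice value of 1.0 means perfect agreement between prediction and ground truth, 0.0 means no overlap at all.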
dc.language.iso eng
dc.publisher London : Nature Publishing Group
dc.relation.ispartofseries Scientific Reports 11 (2021), Nr. 1
dc.rights CC BY 4.0
dc.rights.uri https://creativecommons.org/licenses/by/4.0/
dc.subject Segmentation eng
dc.subject Architecture eng
dc.subject Growth eng
dc.subject Rhizo eng
dc.subject Tool eng
dc.subject.ddc 500 | Naturwissenschaften ger
dc.subject.ddc 600 | Technik ger
dc.title Fully-automated root image analysis (faRIA)
dc.type Article
dc.type Text
dc.relation.essn 2045-2322
dc.relation.doi https://doi.org/10.1038/s41598-021-95480-y
dc.bibliographicCitation.issue 1
dc.bibliographicCitation.volume 11
dc.bibliographicCitation.firstPage 16047
dc.description.version publishedVersion
tib.accessRights frei zugänglich

