“Are Machines Better Than Humans in Image Tagging?” - A User Study Adds to the Puzzle


dc.identifier.uri http://dx.doi.org/10.15488/9788
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/9845
dc.contributor.author Ewerth, Ralph ger
dc.contributor.author Springstein, Matthias ger
dc.contributor.author Phan-Vogtmann, Lo An ger
dc.contributor.author Schütze, Juliane ger
dc.contributor.editor Jose, Joemon M.
dc.contributor.editor Hauff, Claudia
dc.contributor.editor Altingovde, Ismail Sengor
dc.contributor.editor Song, Dawei
dc.contributor.editor Albakour, Dyaa
dc.contributor.editor Watt, Stuart
dc.contributor.editor Tait, John
dc.date.accessioned 2020-04-22T15:30:06Z
dc.date.available 2020-04-22T15:30:06Z
dc.date.issued 2017
dc.identifier.citation Ewerth, R.; Springstein, M.; Phan-Vogtmann, L.A.; Schütze, J.: “Are Machines Better Than Humans in Image Tagging?” - A User Study Adds to the Puzzle. In: Jose, J. et al. (Eds.): Advances in Information Retrieval : 39th European Conference on IR Research, ECIR 2017, Aberdeen, UK, April 8-13, 2017, Proceedings. Cham : Springer, 2017 (Lecture Notes in Computer Science ; 10193), pp. 186-198. DOI: https://doi.org/10.1007/978-3-319-56608-5_15 ger
dc.description.abstract “Do machines perform better than humans in visual recognition tasks?” Not long ago, this question would have been considered somewhat provocative, and the answer would have been a clear “No”. In this paper, we present a comparison of human and machine performance with respect to annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this respect, we also report the results of two extensive user studies: in total, 23 participants were asked to annotate more than 1000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among the human coders, and the results are compared with the best machine results. The study is preceded by a summary of prior studies that compared human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks with human performance. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question. ger
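The abstract's central measure is Krippendorff's α for inter-coder agreement. As an illustrative aside (not part of the paper or this record), the following is a minimal sketch of α for nominal labels, assuming the coding data arrives as one item→label dict per coder; function and variable names are hypothetical, not from the study.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(codings):
    """Krippendorff's alpha for nominal data.

    codings: one dict per coder, mapping item -> label; items a coder
    did not annotate are simply absent from that coder's dict.
    """
    # Collect, per item, the labels assigned by the coders; units coded
    # by fewer than two coders carry no (dis)agreement information.
    items = set().union(*codings)
    units = []
    for item in items:
        labels = [c[item] for c in codings if item in c]
        if len(labels) >= 2:
            units.append(labels)

    # Coincidence counts: ordered label pairs within each unit, each
    # pair weighted by 1/(m_u - 1), where m_u = labels in the unit.
    o = Counter()
    for labels in units:
        m = len(labels)
        for i, j in permutations(range(m), 2):
            o[(labels[i], labels[j])] += 1.0 / (m - 1)

    # Marginal totals per label and total number of pairable values.
    n_c = Counter()
    for (a, _), w in o.items():
        n_c[a] += w
    n = sum(n_c.values())

    # Observed vs. expected disagreement (nominal metric: mismatch = 1).
    d_o = sum(w for (a, b), w in o.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

Two coders who agree on every label yield α = 1.0; α falls toward (and below) 0 as disagreement approaches or exceeds the level expected by chance.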
dc.language.iso eng ger
dc.publisher Cham : Springer
dc.relation.ispartof Advances in Information Retrieval : 39th European Conference on IR Research, ECIR 2017, Aberdeen, UK, April 8-13, 2017, Proceedings ger
dc.relation.ispartofseries Lecture Notes in Computer Science ; 10193
dc.rights CC BY 4.0 International ger
dc.rights.uri https://creativecommons.org/licenses/by/4.0/ ger
dc.subject Human Performance eng
dc.subject Machine Performance eng
dc.subject Convolutional Neural Network eng
dc.subject Mean Average Precision eng
dc.subject Ground Truth Data eng
dc.subject.classification Conference publication ger
dc.subject.ddc 020 | Library and information science ger
dc.subject.ddc 004 | Computer science ger
dc.title “Are Machines Better Than Humans in Image Tagging?” - A User Study Adds to the Puzzle ger
dc.type bookPart
dc.type Text
dc.relation.essn 1611-3349
dc.relation.isbn 978-3-319-56607-8
dc.relation.issn 0302-9743
dc.relation.doi 10.1007/978-3-319-56608-5_15
dc.description.version publishedVersion ger
tib.accessRights freely accessible



This item appears in the following Collection(s):

  • Zentrale Einrichtungen
    Freely accessible publications from central facilities (Zentrale Einrichtungen) of Leibniz Universität Hannover
