Cross-domain multi-task learning for sequential sentence classification in research papers

dc.identifier.uri http://dx.doi.org/10.15488/17060
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/17188
dc.contributor.author Brack, Arthur
dc.contributor.author Hoppe, Anett
dc.contributor.author Buschermöhle, Pascal
dc.contributor.author Ewerth, Ralph
dc.date.accessioned 2024-04-15T12:33:05Z
dc.date.available 2024-04-15T12:33:05Z
dc.date.issued 2022
dc.identifier.citation Brack, A.; Hoppe, A.; Buschermöhle, P.; Ewerth, R.: Cross-domain multi-task learning for sequential sentence classification in research papers. In: Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries. New York, NY : Association for Computing Machinery, 2022, 34. DOI: https://doi.org/10.1145/3529372.3530922
dc.description.abstract Sequential sentence classification deals with the categorisation of sentences based on their content and context. Applied to scientific texts, it enables the automatic structuring of research papers and the improvement of academic search engines. However, previous work has not investigated the potential of transfer learning for sentence classification across different scientific domains, nor the issue of the differing text structures of full papers and abstracts. In this paper, we derive seven related research questions and present several contributions to address them: First, we suggest a novel uniform deep learning architecture and multi-task learning for cross-domain sequential sentence classification in scientific texts. Second, we tailor two common transfer learning methods, sequential transfer learning and multi-task learning, to deal with the challenges of the given task. Semantic relatedness of tasks is a prerequisite for successful transfer learning of neural models. Consequently, our third contribution is an approach to semi-automatically identify semantically related classes from different annotation schemes, and we present an analysis of four annotation schemes. Comprehensive experimental results indicate that models trained on datasets from different scientific domains benefit from one another when using the proposed multi-task learning architecture. We also report comparisons with several state-of-the-art approaches. Our approach significantly outperforms the state of the art on full-paper datasets while being on par for datasets consisting of abstracts. eng
dc.language.iso eng
dc.publisher New York, NY : Association for Computing Machinery
dc.relation.ispartof Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries
dc.rights CC BY 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by/4.0
dc.subject Multi-task learning eng
dc.subject Scholarly communication eng
dc.subject Sequential sentence classification eng
dc.subject Transfer learning eng
dc.subject Zone identification eng
dc.subject.classification Konferenzschrift ger
dc.subject.ddc 020 | Bibliotheks- und Informationswissenschaft
dc.title Cross-domain multi-task learning for sequential sentence classification in research papers eng
dc.type BookPart
dc.type Text
dc.relation.isbn 978-1-4503-9345-4
dc.relation.doi https://doi.org/10.1145/3529372.3530922
dc.bibliographicCitation.firstPage 34
dc.description.version publishedVersion eng
tib.accessRights frei zugänglich
dc.bibliographicCitation.articleNumber 34


This publication appears in the following collection(s):

  • Central Institutions (Zentrale Einrichtungen)
    Freely accessible publications from central institutions of Leibniz Universität Hannover
