Cross-domain multi-task learning for sequential sentence classification in research papers

Download statistics for this document (evaluated according to COUNTER):

Brack, A.; Hoppe, A.; Buschermöhle, P.; Ewerth, R.: Cross-domain multi-task learning for sequential sentence classification in research papers. In: Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries. New York, NY: Association for Computing Machinery, 2022, 34. DOI: https://doi.org/10.1145/3529372.3530922

Version in the repository

To cite the version in the repository, please use this DOI: https://doi.org/10.15488/17060


Total downloads: 2




Abstract:
Sequential sentence classification deals with the categorisation of sentences based on their content and context. Applied to scientific texts, it enables the automatic structuring of research papers and the improvement of academic search engines. However, previous work has not investigated the potential of transfer learning for sentence classification across different scientific domains and the issue of the differing text structure of full papers and abstracts. In this paper, we derive seven related research questions and present several contributions to address them: First, we suggest a novel uniform deep learning architecture and multi-task learning for cross-domain sequential sentence classification in scientific texts. Second, we tailor two common transfer learning methods, sequential transfer learning and multi-task learning, to deal with the challenges of the given task. Semantic relatedness of tasks is a prerequisite for successful transfer learning of neural models. Consequently, our third contribution is an approach to semi-automatically identify semantically related classes from different annotation schemes, and we present an analysis of four annotation schemes. Comprehensive experimental results indicate that models trained on datasets from different scientific domains benefit from one another when using the proposed multi-task learning architecture. We also report comparisons with several state-of-the-art approaches. Our approach significantly outperforms the state of the art on full paper datasets while being on par for datasets consisting of abstracts.
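The core idea of such a multi-task architecture — one encoder shared across domains, with a separate classification head per dataset/annotation scheme — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the toy dimensions, random weights, and the task names `PUBMED` and `CS_ABSTRACTS` are hypothetical placeholders.

```python
# Hypothetical sketch of multi-task learning for sentence classification:
# a shared encoder produces one representation, and each dataset/domain
# has its own output head over its own label scheme.
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Stands in for a shared contextual sentence encoder (e.g. a BiLSTM/transformer)."""
    def __init__(self, in_dim, hidden_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, hidden_dim))

    def __call__(self, x):
        # Same parameters are used for every domain -> cross-domain transfer.
        return np.tanh(x @ self.W)

class TaskHead:
    """Task-specific output layer; one per annotation scheme/dataset."""
    def __init__(self, hidden_dim, n_classes):
        self.W = rng.normal(scale=0.1, size=(hidden_dim, n_classes))

    def __call__(self, h):
        z = h @ self.W
        return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # softmax

encoder = SharedEncoder(in_dim=8, hidden_dim=16)
heads = {"PUBMED": TaskHead(16, 5), "CS_ABSTRACTS": TaskHead(16, 4)}

sentences = rng.normal(size=(3, 8))   # 3 toy sentence embeddings
h = encoder(sentences)                # shared representation for all tasks
probs = heads["PUBMED"](h)            # domain-specific class probabilities
print(probs.shape)                    # (3, 5): 3 sentences, 5 classes
```

During training, batches from the different datasets would update the shared encoder jointly while each head only sees its own labels — which is how datasets with different annotation schemes can still benefit from one another.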
License terms: CC BY 4.0 Unported
Publication type: BookPart
Publication status: publishedVersion
First published: 2022
This publication appears in collection(s): Zentrale Einrichtungen

Downloads by country of origin:

Pos. Country        Downloads  Percent
1    United States  1          50.00%
2    China          1          50.00%


Note

Download statistics are collected according to internationally recognised rules and standards as defined by the "COUNTER Code of Practice for e-Resources". COUNTER is an international non-profit organisation in which library associations, database providers, and publishers jointly develop standards for collecting, storing, and processing usage data for electronic resources, which are intended to ensure objectivity and comparability. Only accesses to the corresponding full texts are counted, not visits to the website itself.