Show simple item record

dc.identifier.uri http://dx.doi.org/10.15488/11525
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/11614
dc.contributor.advisor Funke, Thorben
dc.contributor.advisor Anand, Avishek
dc.contributor.author Nauen, Tobias Christian eng
dc.date.accessioned 2021-11-19T10:24:31Z
dc.date.available 2021-11-19T10:24:31Z
dc.date.issued 2021
dc.identifier.citation Nauen, Tobias Christian: Explaining Graph Neural Networks. Hannover : Gottfried Wilhelm Leibniz Universität, Bachelor Thesis, 2021, V, 42 S. DOI: https://doi.org/10.15488/11525 eng
dc.description.abstract Graph Neural Networks are an up-and-coming class of neural networks that operate on graphs and can therefore deal with connected, highly complex data. As explaining neural networks becomes more and more important, we investigate different ways to explain graph neural networks and contrast gradient-based explanations with the interpretability-by-design approach KEdge. We extend KEdge to work with probability distributions other than HardKuma. Our goal is to test the performance of each method to judge which one works best under given circumstances. For this, we extend the notion of fidelity from hard attribution weights to soft attribution weights and use the resulting metric to evaluate the explanations generated by KEdge, as well as those generated by the gradient-based techniques. We also compare the predictive performance of KEdge models with different distributions. Our experiments are run on the Cora, CiteSeer, Pubmed, and MUTAG datasets. We find that KEdge outperforms the gradient-based attribution techniques on graph classification problems and that it should be used with the HardNormal, HardKuma, or HardLaplace distribution, depending on whether the top priority is model performance or attribution quality. To compare different metrics for judging attributions in the text domain, we visualize attribution weights generated by different models and find that metrics which compare model attributions to human explanations lead to poor attribution weights. eng
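The fidelity extension described in the abstract can be illustrated with a minimal sketch. This is an assumption about the general shape of such a metric, not the thesis's actual definition: hard fidelity compares the model's prediction on the full input against its prediction with the attributed features removed, and a soft variant can instead down-weight each feature by its attribution weight. The function names and masking scheme below are hypothetical.

```python
import numpy as np

def fidelity(predict, x, weights, hard_threshold=None):
    """Illustrative fidelity-style score (assumed form, not the thesis's exact metric).

    predict: callable mapping a feature vector to a class probability
    x: input feature vector
    weights: attribution weights in [0, 1], one per feature
    hard_threshold: if set, binarize the weights (classic hard fidelity);
                    if None, use the soft weights directly
    """
    if hard_threshold is not None:
        weights = (weights >= hard_threshold).astype(float)
    full = predict(x)
    # down-weight (for hard weights: remove) the attributed features and re-predict
    masked = predict(x * (1.0 - weights))
    # a large drop means the attributed features were important to the prediction
    return full - masked

# toy model: a logistic predictor dominated by feature 0
def predict(x):
    return 1.0 / (1.0 + np.exp(-(3.0 * x[0] + 0.1 * x[1])))

x = np.array([1.0, 1.0])
good = fidelity(predict, x, np.array([0.9, 0.1]))  # attributes the important feature
bad = fidelity(predict, x, np.array([0.1, 0.9]))   # attributes the unimportant one
```

Under this sketch, an attribution that concentrates weight on the influential feature yields a higher score than one that does not, which is the property the metric is meant to capture.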
dc.language.iso eng eng
dc.publisher Hannover : Gottfried Wilhelm Leibniz Universität
dc.rights CC BY 3.0 DE eng
dc.rights.uri http://creativecommons.org/licenses/by/3.0/de/ eng
dc.subject Attribution eng
dc.subject Graph Neural Networks eng
dc.subject GNN eng
dc.subject.ddc 004 | Informatik eng
dc.title Explaining Graph Neural Networks eng
dc.type BachelorThesis eng
dc.type Text eng
dcterms.extent V, 42 S.
dc.description.version publishedVersion eng
tib.accessRights frei zugänglich (freely accessible) eng

