Comparing Convolutional Neural Network in a Rashomon Set using Explanations

Download statistics for this document (evaluated according to COUNTER):

Piremkumar, Prathep: Comparing Convolutional Neural Network in a Rashomon Set using Explanations. Hannover: Gottfried Wilhelm Leibniz Universität, master's thesis, 2022, 55 pp. DOI: http://doi.org/10.15488/11895


Total downloads: 213




Abstract:
Deep learning neural networks achieve strong performance in image classification tasks. To measure this performance, a validation set is used to estimate performance on unseen test data. A set of different models with similar performance on the validation set is called a Rashomon set. Even though their validation performance is similar, the reasoning behind their decisions may differ. Unfortunately, deep neural networks are black-box models whose reasoning behind a decision is not transparent. In this thesis we compare these black-box models and aim to distinguish models that are right for the right reasons from models that are right for the wrong reasons.

We examine whether differing reasons between models can be found using extremal perturbation masks, which highlight the image regions most important for a model's prediction. We compare the similarity of masks from different models on the same instance. We find that images with a decoy can be detected with this method if we compare a decoy model with non-decoy models. However, some images without decoys exhibit properties similar to those of images with decoys.

Another method explored in this thesis uses influential instances, i.e., training instances that are important for the decision on a validation instance. By comparing these influential instances across models, we aim to reveal differences in the reasoning behind a decision. Similar to the previous approach, images with decoys can be detected, but images without a decoy can exhibit properties similar to those of images with decoys.

We conclude that, in certain scenarios, explanations are useful for distinguishing models that are right for the right reasons from models that are right for the wrong reasons.
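The abstract describes comparing the similarity of extremal perturbation masks from different models on the same image. A minimal sketch of one way such a comparison could work, using intersection-over-union of binarized saliency masks; the IoU measure and the 0.5 binarization threshold are illustrative assumptions, not details taken from the thesis:

```python
def mask_iou(mask_a, mask_b, threshold=0.5):
    """Intersection-over-union of two binarized saliency masks.

    mask_a, mask_b: 2D lists of floats in [0, 1], e.g. extremal
    perturbation masks produced by two models for the same image.
    The binarization threshold is an illustrative choice.
    """
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for va, vb in zip(row_a, row_b):
            a, b = va >= threshold, vb >= threshold
            inter += a and b
            union += a or b
    return 1.0 if union == 0 else inter / union

# Toy 4x4 masks: both models attend to the top-left, one more widely.
m1 = [[1.0, 1.0, 0.0, 0.0]] * 2 + [[0.0] * 4] * 2
m2 = [[1.0, 1.0, 1.0, 0.0]] * 2 + [[0.0] * 4] * 2
print(round(mask_iou(m1, m2), 3))  # masks overlap on 4 of 6 active cells
```

A low IoU between a decoy model's mask and the masks of non-decoy models on the same instance would flag that instance for inspection, in the spirit of the comparison described above.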
License terms: CC BY 3.0 DE
Publication type: MasterThesis
Publication status: publishedVersion
First published: 2022-03-01
This publication appears in collection(s): Fakultät für Elektrotechnik und Informatik


Downloads by country of origin:

Pos.  Country         Downloads  Share
1     Germany         144        67.61%
2     United States   32         15.02%
3     India           7          3.29%
4     Spain           5          2.35%
5     Canada          5          2.35%
6     China           3          1.41%
7     Austria         3          1.41%
8     Vietnam         2          0.94%
9     Japan           2          0.94%
10    United Kingdom  2          0.94%
      other           8          3.76%



Note

Download statistics are collected according to internationally recognized rules and standards, as defined in the "COUNTER Code of Practice for e-Resources". COUNTER is an international non-profit organization in which library associations, database providers, and publishers jointly develop standards for collecting, storing, and processing usage data for electronic resources, thereby ensuring objectivity and comparability. Only accesses to the full texts are counted, not visits to the website itself.