
This Looks Like That, Because ...: Explaining Prototypes for Interpretable Image Recognition (cs.CV)

2020-12-07 19:16:42 Ling Qian

Image recognition with prototypes is regarded as an interpretable alternative to black-box deep learning models: classification depends on the degree to which a test image "looks like" a prototype. However, perceptual similarity for humans can differ from the similarity learned by the model. A user is unaware of the underlying classification strategy and does not know which image characteristic (for example, color or shape) dominates the decision. We address this ambiguity and argue that prototypes should be explained. Merely visualizing prototypes can be insufficient for understanding what a prototype actually represents and why a prototype and an image are considered similar. We improve interpretability by automatically augmenting prototypes with extra information about the visual characteristics the model considers important. Specifically, our method quantifies the influence of color hue, shape, texture, contrast, and saturation in a prototype. We apply the method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype that might otherwise be misinterpreted. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of the approach, it can improve the interpretability of any similarity-based prototypical image recognition method.
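One way to picture the kind of quantification described above is a perturbation sketch: remove or distort a single visual characteristic of an image, recompute the similarity to a prototype, and treat the change in the score as that characteristic's importance. The snippet below is only a minimal illustration of this idea, not the authors' implementation; similarity_to_prototype, model.prototype_similarities, and prototype_idx are hypothetical names standing in for a trained ProtoPNet-style model's patch-wise similarity computation.

    # Minimal sketch (not the authors' code): estimate how much a prototype's
    # similarity score relies on color hue by removing hue from the image and
    # measuring the change in similarity.
    import torch
    from torchvision import transforms

    def similarity_to_prototype(model, image_tensor, prototype_idx):
        """Return the maximum similarity between any image patch and one prototype.

        Assumed interface: model.prototype_similarities(batch) yields a tensor of
        shape (batch, num_prototypes, H, W) with patch-wise similarity scores.
        """
        with torch.no_grad():
            sims = model.prototype_similarities(image_tensor.unsqueeze(0))
        return sims[0, prototype_idx].max().item()

    def hue_importance(model, pil_image, prototype_idx):
        """Compare similarity before and after removing hue; a large drop
        suggests the prototype relies heavily on color hue."""
        to_tensor = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        original = to_tensor(pil_image)
        # Converting to grayscale and back to 3 channels keeps the input shape
        # expected by the network while discarding hue information.
        no_hue = to_tensor(pil_image.convert("L").convert("RGB"))

        score_original = similarity_to_prototype(model, original, prototype_idx)
        score_no_hue = similarity_to_prototype(model, no_hue, prototype_idx)
        return score_original - score_no_hue

Analogous perturbations (for example, lowering saturation or contrast with PIL's ImageEnhance) would give one importance score per characteristic, which could then be attached to each prototype as an explanation.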

Original title: This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

Original abstract: Image recognition with prototypes is considered an interpretable alternative for black box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can be different from the similarity learnt by the model. A user is unaware of the underlying classification strategy and does not know which image characteristics (e.g., color or shape) is the dominant characteristic for the decision. We address this ambiguity and argue that prototypes should be explained. Only visualizing prototypes can be insufficient for understanding what a prototype exactly represents, and why a prototype and an image are considered similar. We improve interpretability by automatically enhancing prototypes with extra information about visual characteristics considered important by the model. Specifically, our method quantifies the influence of color hue, shape, texture, contrast and saturation in a prototype. We apply our method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype which might have been interpreted incorrectly otherwise. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition.

Original authors: Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert

Original address: https://arxiv.org/abs/2011.02863

Statement: This article is published with the author's authorization via the community and may not be reproduced without permission.

In case of infringement, please contact yunjia_community@tencent.com to request deletion.

Copyright notice
This article was written by [Ling Qian]; please include the original link when reposting. Thank you.
https://chowdera.com/2020/11/20201112182545399i.html