A Survey of Techniques for Interpreting Machine Learning Results, Using Defect Detection by Normal-Beam UT as an Example
Abstract
A machine learning model must be interpretable for its decisions to be accepted by people. SHAP (SHapley Additive exPlanations) is a tool that shows how much each of the features used to train a machine learning model contributes to its decision. This paper presents an application of SHAP to a deep learning model for defect detection by normal-beam UT, as an example of a nondestructive inspection problem.
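The attribution idea behind SHAP is the Shapley value from cooperative game theory: each feature's contribution is its average marginal effect on the model output over all orderings in which features are "revealed". A minimal, self-contained sketch of that computation (using a hypothetical toy model, not the paper's UT defect-detection model or the `shap` library itself) looks like this:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values for a small feature set: average each
    feature's marginal contribution over all feature orderings,
    replacing 'absent' features with baseline values."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline input
        prev = f(z)
        for i in order:
            z[i] = x[i]               # reveal feature i
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return [p / len(perms) for p in phi]

# Hypothetical toy "model": a linear score with one interaction term
def model(z):
    return 2.0 * z[0] + 1.0 * z[1] + z[0] * z[2]

x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print(phi)  # the interaction term's credit is split between features 0 and 2
print(sum(phi), model(x) - model(baseline))  # efficiency: attributions sum to the output gap
```

This brute-force version is exponential in the number of features; practical SHAP implementations use model-specific approximations (e.g. TreeSHAP, KernelSHAP, DeepSHAP) to make the same attribution tractable for real models.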