Improving explainable AI’s explanations

{"value":"Explainability is an important research topic in AI today. If we’re going to trust deep-learning systems to make decisions for us, we often want to know why they make the decisions they do.\n\nOne popular approach to explainable AI is concept-based explanation. Instead of simply learning to predict labels from input features, the model learns to assign values to a large array of concepts. For instance, if the inputs are images of birds, the concepts might be things like bill shape, breast color, and wing pattern. Then, on the basis of the concept values, the model classifies the input: say, a yellow grosbeak.\n\nBut this approach can run into trouble if there are confounders in the training data. For instance, if birds with spatulate bills are consistently photographed on the water, the model could learn to associate water imagery with the concept “bill shape: spatulate”. And that could produce nonsensical results in the case of, say, a starling that happened to be photographed near a lake.\n\n#### **More ICLR-related content**\n\nToday, as part of our [ICLR](https://www.amazon.science/conferences-and-events/iclr-2021) coverage, Amazon Science also features [a profile of Michael Bronstein](https://www.amazon.science/research-awards/success-stories/michael-bronstein-aims-to-unite-the-deep-learning-community), a professor of computer science at Imperial College London who received an Amazon Research Award for work that pushes the boundaries of drug design, reveals the cancer-fighting properties of food — and even decodes whale-speak.\n\nIn a [paper](https://www.amazon.science/publications/debiasing-concept-based-explanations-with-causal-analysis) that Amazon distinguished scientist David Heckerman and I are presenting this week at the International Conference on Learning Representations (ICLR), we adapt a technique for removing confounders from causal models, called instrumental-variable analysis, to the problem of concept-based explanation.\n\nIn tests on a benchmark dataset of images annotated with concept labels, we show that our method increases the classification accuracy of a concept-based explanatory model by an average of 25%. Using the remove-and-retrain (ROAR) methodology, we also show that our method improves the model’s ability to identify concepts relevant to the correct image label.\n\n![image.png](https://dev-media.amazoncloud.cn/1297f2d0e07d456db66e938e7eb60e8d_image.png)\n\nA simple (too simple) causal graph of a concept-based explanatory model.\n\nOur analysis begins with a causal graph, which encodes our prior belief about the causal relationships among the variables. In our case, the belief is that a prediction target (y) causes a concept representation (c), which in turn causes an input (x). Note that prediction happens in the opposite direction, but this doesn’t matter as the statistical relationships between data and concept and concept and label remain the same.\n\nConfounders complicate this simple model. In the figure below, u is a confounder, which influences both the input and the concept (c) learned by the model; d is the debiased concept we wish to learn. 
But this approach can run into trouble if there are confounders in the training data. For instance, if birds with spatulate bills are consistently photographed on the water, the model could learn to associate water imagery with the concept “bill shape: spatulate”. And that could produce nonsensical results in the case of, say, a starling that happened to be photographed near a lake.

#### **More ICLR-related content**

Today, as part of our [ICLR](https://www.amazon.science/conferences-and-events/iclr-2021) coverage, Amazon Science also features [a profile of Michael Bronstein](https://www.amazon.science/research-awards/success-stories/michael-bronstein-aims-to-unite-the-deep-learning-community), a professor of computer science at Imperial College London who received an Amazon Research Award for work that pushes the boundaries of drug design, reveals the cancer-fighting properties of food — and even decodes whale-speak.

In a [paper](https://www.amazon.science/publications/debiasing-concept-based-explanations-with-causal-analysis) that Amazon distinguished scientist David Heckerman and I are presenting this week at the International Conference on Learning Representations (ICLR), we adapt a technique for removing confounders from causal models, called instrumental-variable analysis, to the problem of concept-based explanation.

In tests on a benchmark dataset of images annotated with concept labels, we show that our method increases the classification accuracy of a concept-based explanatory model by an average of 25%. Using the remove-and-retrain (ROAR) methodology, we also show that our method improves the model’s ability to identify concepts relevant to the correct image label.

![image.png](https://dev-media.amazoncloud.cn/1297f2d0e07d456db66e938e7eb60e8d_image.png)

A simple (too simple) causal graph of a concept-based explanatory model.

Our analysis begins with a causal graph, which encodes our prior belief about the causal relationships among the variables. In our case, the belief is that a prediction target (y) causes a concept representation (c), which in turn causes an input (x). Note that prediction happens in the opposite direction, but this doesn’t matter, as the statistical relationships between the input and the concepts, and between the concepts and the labels, remain the same.

Confounders complicate this simple model. In the figure below, u is a confounder, which influences both the input and the concept (c) learned by the model; d is the debiased concept we wish to learn.

![image.png](https://dev-media.amazoncloud.cn/513427e09f324b3aaf60361a69b1eb2e_image.png)

A more realistic causal graph of a concept-based explanatory model, with a confounding variable (u) and a debiased concept variable (d).

In the terms of our example, u represents the watery backgrounds common to images of birds with spatulate bills, c is the confounded concept of bill shape, and d is a debiased concept of bill shape, which correlates with actual visual features of birds’ bills.

Note, too, that there is a second causal path between input and label, which bypasses the concept representation. The experts who label images of birds, for instance, may rely on image features not captured by the list of concepts.

Our approach uses a trick from classic instrumental-variable analysis, which considers the case in which a variable p has a causal effect on the variable q, but that effect is obscured by a confounding variable, u, which influences both p and q. The analysis posits an instrumental variable, z, which is correlated with p but affects q only through p. Instrumental-variable analysis uses regression to estimate p from z; since z is independent of the confounder u, so is the estimate of p, known as p̂. A regression of q on p̂ is thus an estimate of the causal impact of p on q.
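As a rough, self-contained illustration of that two-stage idea (not code from the paper), the following NumPy sketch recovers the causal effect of p on q from synthetic data in which a confounder u drives both:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                               # instrument: drives p, independent of u
u = rng.normal(size=n)                               # unobserved confounder
p = z + u + 0.1 * rng.normal(size=n)
q = 2.0 * p + 3.0 * u + 0.1 * rng.normal(size=n)     # true causal effect of p on q is 2.0

# Naive regression of q on p is biased by the confounder u.
naive = np.cov(p, q)[0, 1] / np.var(p)

# Stage 1: regress p on the instrument z to get p_hat, which is independent of u.
beta_zp = np.cov(z, p)[0, 1] / np.var(z)
p_hat = beta_zp * z

# Stage 2: regress q on p_hat; this estimates the causal effect of p on q.
iv = np.cov(p_hat, q)[0, 1] / np.var(p_hat)

print(f"naive: {naive:.2f}, instrumental-variable estimate: {iv:.2f} (true effect: 2.0)")
```

Here the naive estimate comes out near 3.5, while the two-stage estimate lands near the true value of 2.0.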
![image.png](https://dev-media.amazoncloud.cn/5d83449223ea47b7936314595859b3c7_image.png)

Our updated causal model, in which we use regression to estimate concepts (ĉ) from labels (y).

In our causal graph above, we can use regression to estimate d from y and c from d, breaking the causal link between u and the estimate of c, ĉ. (In practice, we just set the estimate of c equal to the estimate of d.)
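In practice, that step can be sketched as a regression of the annotated concepts on the (one-hot) labels, with the fitted values ĉ used downstream in place of the raw annotations. The function below is an illustrative sketch under assumed array shapes, not the paper’s exact implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def debias_concepts(y: np.ndarray, c: np.ndarray, n_classes: int) -> np.ndarray:
    """Regress the annotated concepts c (n_examples x n_concepts) on the class
    labels y (n_examples,) and return the fitted values c_hat, which depend on
    y alone and hence carry no dependence on the confounder u."""
    y_onehot = np.eye(n_classes)[y]               # one-hot encode the labels
    reg = LinearRegression().fit(y_onehot, c)     # regression of concepts on labels
    return reg.predict(y_onehot)                  # c_hat, used in place of c downstream
```

For the benchmark dataset described below, c would be the 11,788 × 312 matrix of concept annotations and n_classes would be 200.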
Using a benchmark dataset that contains 11,788 images of 200 types of birds, annotated according to 312 concepts, we trained two concept-based explanatory models, which were identical except that one used regression to estimate concepts and one didn’t. The model that used regression was 25% more accurate than the one that didn’t.

The accuracy of the classifier, however, doesn’t tell us anything about the accuracy of the concept identification, which is the other purpose of the model. To evaluate that, we used the ROAR method. First, we train both models using all 312 concepts for each training example. Then we discard the least relevant 31 concepts (10%) for each training example and re-train the models. Then we discard the next least relevant 31 concepts per example and re-train, and so on.
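The ROAR loop can be sketched roughly as follows. The relevance score (|class weight × concept value|) and the use of zeroing to “remove” concepts are simplifying assumptions for illustration, not the paper’s exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def roar_curve(concepts: np.ndarray, labels: np.ndarray,
               n_rounds: int = 9, drop_per_round: int = 31) -> list:
    """Remove-and-retrain sketch: repeatedly zero out the least relevant
    concepts for each example, retrain the concept-to-label classifier,
    and record its accuracy after each round."""
    masked = concepts.astype(float).copy()
    accuracies = []
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(masked, labels)
        accuracies.append(clf.score(masked, labels))        # accuracy on the training data
        # Per-example relevance of each remaining concept (assumes multiclass labels 0..n-1).
        relevance = np.abs(masked * clf.coef_[labels])
        relevance[masked == 0] = np.inf                      # keep already-removed concepts removed
        least_relevant = np.argsort(relevance, axis=1)[:, :drop_per_round]
        np.put_along_axis(masked, least_relevant, 0.0, axis=1)
    return accuracies
```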
![image.png](https://dev-media.amazoncloud.cn/34a254e306ae42109e4a030237d977af_image.png)

Our debiased model (red) exhibits greater relative accuracy improvements than the baseline (blue) as we successively remove more and more irrelevant concepts from the training data, indicating that it does a better job of identifying relevant concepts.

We find that, as irrelevant concepts are discarded, our debiased model exhibits greater relative improvement in accuracy than the baseline model. This indicates that our model is doing a better job than the baseline of identifying relevant concepts.

ABOUT THE AUTHOR

#### **[Mohammad Taha Bahadori](https://www.amazon.science/author/mohammed-taha-bahadori)**

Taha Bahadori is a senior machine learning scientist in the Amazon Devices organization.