How to reduce annotation when evaluating AI systems

{"value":"Commercial machine learning systems are trained on examples meant to represent the real world. But the world is constantly changing, and deployed machine learning systems need to be regularly reevaluated, to ensure that their performance hasn’t declined.\n\nEvaluating a deployed AI system means manually annotating data the system has classified, to determine whether those classifications are accurate. But annotation is labor intensive, so it is desirable to minimize the number of samples required to assess the system’s performance.\n\nMany commercial machine learning systems are in fact ensembles of binary classifiers; each classifier “votes” on whether an input belongs to a particular class, and the votes are pooled to produce a final decision.\n\nIn a [paper](https://www.amazon.science/publications/estimating-precisions-for-multiple-binary-classifiers-under-limited-samples) we’re presenting at the [European Conference on Machine Learning](https://www.amazon.science/conferences-and-events/ecml-pkdd-2020), we show how to reduce the number of random samples required to evaluate ensembles of binary classifiers by exploiting overlaps between the sample sets used to evaluate the individual components.\n\nFor example, imagine that an ensemble that has three classifiers, and we need 10 samples each to evaluate the performance of the three classifiers. Evaluating the ensemble requires 40 samples — 10 each for the individual classifiers and 10 for the full ensemble. If 10 of the 40 samples were duplicates, we could make do with 30 annotations. Our paper builds on this intuition.\n\nIn an experiment using real data, our approach reduced the number of samples required to evaluate an ensemble by more than 89%, while preserving the accuracy of the evaluation.\n\nWe also ran experiments using simulated data that varied the degree of overlap between the sample sets for the individual classifiers. In those experiments, the savings averaged 33%.\n\nFinally, in the paper, we show that our sampling procedure doesn’t introduce any biases into the resulting sample sets, relative to random sampling.\n\n#### **Common ground**\n\nIntuitively, randomly chosen samples for the separate components of an ensemble would inevitably include some duplicates. Most of the samples useful for evaluating one model should thus be useful for evaluating the others. The goal is to add in just enough additional samples to be able to evaluate all the models.\n\nWe begin by choosing a sample set for the entire ensemble, which we dub the “parent”; the individual models of the ensemble are, by reference, “children”. After finding a set of samples sufficient for evaluating the parent, we expand it to include the first child, then repeat the procedure until the set of samples covers all the children.\n\nOur general approach works with any criterion for evaluating an ensemble’s performance, but in the paper, we use precision — or the percentage of true positives that the classifier correctly identifies — as a running example.\n\n![image.png](https://dev-media.amazoncloud.cn/9135c34e33524fde9dbcd90e9cc12fc4_image.png)\n\nIn this figure, the set of inputs classified as positive by the parent (right circle, AP) intersects the set of inputs classified as positive by the child (left circle, AC). The intersection (orange-shaded region) between a random sample of AP (orange curve, SP) and AC represents S+, the samples from the parent’s positive set that were also classified as positive by the child. 
![image.png](https://dev-media.amazoncloud.cn/9135c34e33524fde9dbcd90e9cc12fc4_image.png)

In this figure, the set of inputs classified as positive by the parent (right circle, A_P) intersects the set of inputs classified as positive by the child (left circle, A_C). The intersection (orange-shaded region) between a random sample of A_P (orange curve, S_P) and A_C represents S+, the samples from the parent’s positive set that were also classified as positive by the child. The green-shaded region represents S-, samples from the set of inputs that were classified as positive by the child but not the parent. The sprinkled x’s represent S_remain, additional samples of the inputs classified as positive by the child, required to provide enough samples to get a highly accurate estimate of precision.

We begin with the total set of inputs that the parent has judged to belong to the target class and the corresponding set for the child. There’s usually considerable overlap between the two sets; for example, in a majority-vote ensemble composed of three classifiers, the ensemble (parent) classifies an input as positive as long as two of the components (children) do.

From the parent set, we select enough random samples to evaluate the parent. Then we find the intersection between that sample set and the child’s total set of positive classifications (S+ in the figure above). This becomes our baseline sample set for the child.

Next, we draw a random sample of inputs that the child classified as positive but the parent did not (S-, above). The ratio between the size of this sample and the size of the baseline sample set should be the same as the ratio between the number of inputs that the child — but not the parent — labeled positive and the number of inputs that both labeled positive.
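Written out with the set names from the figure, where A_P and A_C are the inputs labeled positive by the parent and the child respectively, this constraint on the size of the new sample is:

$$\frac{|S^{-}|}{|S^{+}|} = \frac{|A_C \setminus A_P|}{|A_C \cap A_P|}$$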
When we add these samples to the baseline sample set, we get a combined sample set that may not be large enough to accurately estimate precision. If needed, we select more samples from the inputs classified as positive by the child. These samples may also have been classified as positive by the parent (S_remain in the figure above).
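Putting these steps together, the following is a minimal sketch of the construction of a child’s sample set under the assumptions above. The function and variable names (`build_child_sample`, `parent_sample`, `parent_pos`, `child_pos`, `target_size`) are illustrative, not the paper’s implementation:

```python
import random

def build_child_sample(parent_sample, parent_pos, child_pos, target_size, seed=0):
    """Assemble an evaluation sample for one child classifier by reusing the
    parent's already-annotated sample. Arguments are sets of input IDs
    (except target_size); all names are illustrative, not from the paper."""
    rng = random.Random(seed)

    # S+: inputs in the parent's sample that the child also labeled positive.
    s_plus = parent_sample & child_pos

    # Split the child's positives by whether the parent agreed.
    both = child_pos & parent_pos            # A_C ∩ A_P
    child_only = child_pos - parent_pos      # A_C \ A_P

    # S-: sample child-only inputs so that |S-| / |S+| = |A_C \ A_P| / |A_C ∩ A_P|.
    n_minus = round(len(s_plus) * len(child_only) / len(both)) if both else 0
    s_minus = set(rng.sample(sorted(child_only), min(n_minus, len(child_only))))

    combined = s_plus | s_minus

    # S_remain: if the combined set is still too small to estimate precision
    # accurately, top it up with further samples of the child's positives.
    if len(combined) < target_size:
        remaining = sorted(child_pos - combined)
        n_extra = min(target_size - len(combined), len(remaining))
        combined |= set(rng.sample(remaining, n_extra))

    return combined
```

Only the inputs in the resulting set that were not already annotated for the parent require new annotations; reusing the parent’s annotated sample is where the savings come from.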
Recall that we first selected samples from the set where the child and parent agreed, then from the set where the child and parent disagreed. That means that the sample set we have constructed is not truly random, so the next step is to mix together the samples in the combined set.

#### **Reshuffle or resample?**

We experimented with two different ways of performing this mixing. In one, we simply reshuffle all the samples in the combined set. In the other, we randomly draw samples from the combined set and add them to a new mixed set, until the mixed set is the same size as the combined set. In both approaches, the end result is that when we pick any element from the sample, we won’t know whether it came from the set where the parent and child agreed or the one where they disagreed.
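A minimal sketch of the two mixing strategies, assuming the combined sample is a collection of input IDs (illustrative code, not the paper’s); the resampling variant is written here as drawing with replacement, which is consistent with the redundancies described below:

```python
import random

def mix_by_reshuffling(combined, seed=0):
    """Mixing option 1: return a random permutation of the combined sample."""
    rng = random.Random(seed)
    mixed = list(combined)
    rng.shuffle(mixed)
    return mixed

def mix_by_resampling(combined, seed=0):
    """Mixing option 2: repeatedly draw from the combined sample until the
    mixed set reaches the same size. Drawing with replacement is assumed in
    this sketch, so some elements may repeat (the redundancies noted below)."""
    rng = random.Random(seed)
    pool = list(combined)
    return [rng.choice(pool) for _ in range(len(pool))]
```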
![image.png](https://dev-media.amazoncloud.cn/2f49d163181448e498950b3482a8bf88_image.png)

A visualization of the average savings in samples provided by our approach as we varied the amount of overlap between the parent’s and child’s judgments.

In our experiments, we identified a slight trade-off between the results of our algorithm when we used reshuffling to produce the mixed sample set and when we used resampling. Because resampling introduces some redundancies into the mixed set, it requires fewer samples than reshuffling, which increases the savings in sample size versus random sampling.

At the same time, however, it slightly lowers the accuracy of the precision estimate. With reshuffling, our algorithm, on average, slightly outperformed random sampling on our three test data sets, while with resampling, it was slightly less accurate than random sampling.

Overall, the sampling procedure we have developed reduces the sample size. Of course, the amount of savings depends on the overlap between the parent’s and child’s judgments. The greater the overlap, the greater the savings in samples.

ABOUT THE AUTHOR

#### **[Srinivasan Jagannathan](https://www.amazon.science/author/srinivasan-jagannathan)**

Srinivasan Jagannathan is a senior manager of software development at Amazon.