Amazon pushes the boundaries of extreme multilabel classification

{"value":"In the past few years, we’ve published a number of papers about ++[extreme multilabel classification](https://www.amazon.science/tag/extreme-multilabel-classification)++ (XMC), or classifying input data when the number of candidate labels is huge.\n\n![image.png](https://dev-media.amazoncloud.cn/a051724dba7d477a89446b77666f8510_image.png)\n\nExample of a label-partitioning model.\n\nEarlier this year, we publicly released the code for our own XMC framework, PECOS, which makes XMC more efficient through label partitioning. With label partitioning, labels are first grouped into clusters, and a matcher model is trained to assign inputs to clusters. Then, a ranker is trained to select a single label from the designated group for a given input.\n\nAt this year’s ++[Conference on Neural Information Processing Systems (NeurIPS)](https://www.amazon.science/conferences-and-events/neurips-2021)++, we’re presenting two papers that extend the range of label-partitioning frameworks — including but not limited to PECOS — and improve their classification accuracy.\n\nIn “++[Label disentanglement in partition-based extreme multilabel classification](https://www.amazon.science/publications/label-disentanglement-in-partition-based-extreme-multilabel-classification)++”, we consider the case in which the same label belongs to multiple clusters: for instance, the label “apple” might properly belong to one cluster designating computing devices and another designating fruits. We demonstrate a method for assigning labels to multiple clusters that improves classification accuracy with a negligible effect on efficiency.\n\nIn “++[Fast multi-resolution transformer fine-tuning for extreme multi-label text classification](https://www.amazon.science/publications/fast-multi-resolution-transformer-fine-tuning-for-extreme-multi-label-text-classification)++”, we propose a new method for training Transformer-based matching models that reduces training time by 95% while actually increasing accuracy.\n\n![image.png](https://dev-media.amazoncloud.cn/6c36868b82de4fd79e22ddc722592514_image.png)\n\nTraining (left) of the Transformer-based matching model (XR-Transformer) begins with a preliminary hierarchical label tree (HLT). For each layer of the tree, we jointly train a Transformer-based encoder and a linear ranker (Ŵ). Once the Transformer-based encoder has been trained, we train new linear rankers (W), which are used at inference time (right).\n\n#### **Label disentanglement**\n\nPECOS is a flexible framework that allows variation in the implementation of XMC models, and one common approach to label clustering is to use a hierarchical tree, where the labels are first divided into a few, coarse-grained groups, then successively subdivided into finer- and finer-grained groups. The matcher is then trained to traverse the tree to find the lowest-level label cluster.\n\nTypically, the tree is constructed using some fixed measure of label similarity, such as term frequency–inverse document frequency (TD-IDF), which identifies terms that predominate in a given text relative to a larger corpus of texts. 
In addition to the assignment of individual labels to multiple clusters, one of the novelties of our work is that the hierarchical tree itself is learned from data in a supervised manner.

To ensure that every label is assigned to all the clusters to which it properly belongs, we could simply assign every label to every cluster. But of course, that would forfeit the advantage of doing label partitioning in the first place.

Instead, we limit the number of clusters that a given label can be assigned to — in our experiments, we varied the limit from 1 to 6 — and then treat the cluster assignment as an optimization problem. That is, we learn which assignment of labels to clusters maximizes the performance of the XMC model.

At the beginning of the training procedure, we create a provisional hierarchical tree using TF-IDF. Then we train a matcher on that tree. On the basis of that matcher, we then reassign labels to multiple clusters in a way that maximizes classification accuracy.
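The reassignment step can be pictured with a much simpler stand-in than the paper's actual optimization: keep each label in the few leaf clusters to which the trained matcher most often routes that label's positive training examples, up to a fixed budget. The routine below is that simplification; the `routing_counts` matrix and the budget of three clusters per label are assumptions for illustration.

```python
# Simplified stand-in for the reassignment step (not the paper's optimizer).
import numpy as np


def reassign_labels(routing_counts, max_clusters_per_label=3):
    """routing_counts[l, c]: how often the trained matcher routes label l's
    positive training examples to leaf cluster c. Returns {label: [clusters]}."""
    assignment = {}
    for label in range(routing_counts.shape[0]):
        order = np.argsort(-routing_counts[label])           # clusters, best first
        top = [int(c) for c in order[:max_clusters_per_label]
               if routing_counts[label, c] > 0]
        assignment[label] = top or [int(order[0])]           # keep at least one cluster
    return assignment
```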
This process could be repeated as many times as necessary, but in our experiments, we found that one repetition was enough to secure most of the approach’s performance gains.

In our experiments, we compared our approach to nine earlier approaches, using six metrics across four datasets. Of the resulting 24 measurements, our approach achieved the highest score on 21 and placed second on two.

#### **Multiresolution Transformer fine-tuning**

PECOS comes with a number of built-in tools for performing all three steps in our XMC pipeline: label partitioning, matching, and ranking. At KDD 2020, we described our [Transformer-based approach to matching](https://www.amazon.science/blog/natural-language-processing-techniques-text-classification-with-Transformers-at-scale), X-Transformer, which can be used with PECOS or with other XMC approaches.

The [initial PECOS release](https://www.amazon.science/blog/amazon-open-sources-library-for-prediction-over-large-output-spaces) also came with a recursive linear matching model, XR-linear, which learns to match inputs to clusters using the same iterative strategy that we use to build hierarchical trees: first XR-linear performs a coarse-grained matching, then a finer-grained matching, and so on. The “R” in XR-linear stands for “recursive”.

In our second NeurIPS paper, we combine these two approaches to create [XR-Transformer](https://www.amazon.science/publications/fast-multi-resolution-transformer-fine-tuning-for-extreme-multi-label-text-classification), a recursive, Transformer-based matcher. On the Amazon-3M dataset, a standard benchmark in the XMC field that consists of products sorted into three million product categories (the label space), training an X-Transformer matcher takes 23 days on eight GPUs; training an XR-Transformer matcher takes only 29 hours — with a significant improvement in accuracy.
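The recursive, coarse-to-fine matching that XR-linear and XR-Transformer share can be sketched as a beam search down the label tree: at each level, only the children of the best-scoring clusters so far are scored at the next level. The sketch below uses plain linear scorers and assumed data structures (`level_weights`, `children`, and a beam width of 4); it is not the PECOS code.

```python
# Illustrative coarse-to-fine (beam-search) matching over a label tree.
import numpy as np


def coarse_to_fine_match(x, level_weights, children, beam=4):
    """Return the highest-scoring leaf clusters for feature vector x.

    level_weights[d] holds one weight vector per cluster at depth d;
    children[d][c] lists cluster c's children at depth d + 1."""
    candidates = list(range(level_weights[0].shape[0]))      # all top-level clusters
    for depth, weights in enumerate(level_weights):
        scores = {c: float(weights[c] @ x) for c in candidates}
        kept = sorted(scores, key=scores.get, reverse=True)[:beam]
        if depth + 1 == len(level_weights):
            return kept                                      # leaf level reached
        # Expand only the children of clusters that survived the beam.
        candidates = [child for c in kept for child in children[depth][c]]
```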
To train an XR-Transformer matcher, we begin as we did in the label disentanglement work, with a hierarchical label tree based on TF-IDF features. For each layer of the tree, we jointly train a Transformer-based encoder and a linear ranker, which uses both the Transformer model embedding and the TF-IDF features as the basis for assigning an input to a particular cluster in the next layer down the tree.

Once the Transformer-based encoder has been trained at every level of the tree, we concatenate its final label embeddings with the TF-IDF features and, on that basis, produce a new label tree. Then, for each level of the tree, we train new linear rankers with the concatenated features as inputs.
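The concatenation of dense encoder embeddings with sparse TF-IDF features can be pictured as below. This is only a toy illustration of the idea described above, with made-up shapes, and with scipy and scikit-learn standing in for whatever the actual pipeline uses.

```python
# Toy illustration: concatenate dense encoder embeddings with sparse TF-IDF
# features to form inputs for the rankers (or for re-clustering). Shapes are made up.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.preprocessing import normalize

dense_emb = np.random.randn(4, 768).astype(np.float32)                   # encoder embeddings
tfidf = csr_matrix(np.random.rand(4, 20_000) > 0.999, dtype=np.float32)  # sparse TF-IDF

# Normalize each view so neither dominates, then concatenate column-wise.
ranker_input = hstack([normalize(csr_matrix(dense_emb)), normalize(tfidf)], format="csr")
print(ranker_input.shape)                                                 # (4, 20768)
```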
We tested our approach on six public datasets, whose output spaces ranged from about 4,000 labels to the three million of Amazon-3M. We compared our approach to 11 predecessors on three metrics: precision at 1, 3, and 5, or the proportion of relevant labels among the top one, three, and five labels returned.

On the three datasets with 4,000 to 31,000 labels, XR-Transformer achieved the highest score on five of nine measures. But on the three datasets with 500,000 or more labels, it achieved the highest scores across the board, by a significant margin.
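For reference, precision at k, the metric behind these comparisons, can be computed with a few lines; the example data here are invented.

```python
# Precision at k: the fraction of the top-k predicted labels that are among
# an example's true labels, averaged over examples (illustrative code).
def precision_at_k(predictions, true_labels, k):
    total = 0.0
    for ranked, truth in zip(predictions, true_labels):
        total += sum(1 for label in ranked[:k] if label in truth) / k
    return total / len(predictions)


# One document, top-5 predictions, two of which are correct: P@5 = 0.4.
print(precision_at_k([["jazz", "vinyl", "rock", "cd", "live"]],
                     [{"jazz", "cd", "import"}], k=5))
```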
ABOUT THE AUTHOR

#### **[Wei-Cheng Chang](https://www.amazon.science/author/wei-cheng-chang)**

Wei-Cheng Chang is an applied scientist at Amazon.

#### **[Jiong Zhang](https://www.amazon.science/author/jiong-zhang)**

Jiong Zhang is an applied scientist at Amazon.