Simplifying BERT-based models to increase efficiency, capacity

{"value":"In recent years, many of the best-performing models in the field of natural-language processing ([NLP](https://www.amazon.science/tag/nlp)) have been built on top of BERT language models. Pretrained on large corpora of (unlabeled) public texts, BERT models encode the probabilities of sequences of words. Because a BERT model begins with extensive knowledge of a language as a whole, it can be fine-tuned on a more targeted task — like question answering or machine translation — with relatively little labeled data.\n\nBERT models, however, are very large, and BERT-based NLP models can be slow — even prohibitively slow, for users with limited computational resources. Their complexity also limits the length of the inputs they can take, as their memory footprint scales with the square of the input length.\n\nAt this year’s meeting of the Association for Computational Linguistics ([ACL](https://www.amazon.science/conferences-and-events/acl-2022)), my colleagues and I [presented a new method](https://www.amazon.science/publications/pyramid-bert-reducing-complexity-via-successive-core-set-based-token-selection), called Pyramid-BERT, that reduces the training time, inference time, and memory footprint of BERT-based models, without sacrificing much accuracy. The reduced memory footprint also enables BERT models to operate on longer text sequences.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/73fc9719b2e544e9b04008e13f3bb8af_%E4%B8%8B%E8%BD%BD.jpg)\n\nA simplified illustration of the Pyramid-BERT architecture.\n\nBERT-based models take sequences of sentences as inputs and output vector representations — embeddings — of both each sentence as a whole and its constituent words individually. Downstream applications such as text classification and ranking, however, use only the complete-sentence embeddings. To make BERT-based models more efficient, we progressively eliminate redundant individual-word embeddings in intermediate layers of the network, while trying to minimize the effect on the complete-sentence embeddings.\n\nWe compare Pyramid-BERT to several state-of-the-art techniques for making BERT models more efficient and show that we can speed inference up 3- to 3.5-fold while suffering an accuracy drop of only 1.5%, whereas, at the same speeds, the best existing method loses 2.5% of its accuracy.\n\nMoreover, when we apply our method to Performers — variations on BERT models that are specifically designed for long texts — we can reduce the models’ memory footprint by 70%, while actually increasing accuracy. At that compression rate, the best existing approach suffers an accuracy dropoff of 4%.\n\n\n#### **A token’s progress**\n\n\nEach sentence input to a BERT model is broken into units called tokens. Most tokens are words, but some are multiword phrases, some are subword parts, some are individual letters of acronyms, and so on. The start of each sentence is demarcated by a special token called — for reasons that will soon be clear — CLS, for classification.\n\nEach token passes through a series of encoders — usually somewhere between four and 12 — each of which produces a new embedding for each input token. Each encoder has an attention mechanism, which decides how much each token’s embedding should reflect information carried by other tokens.\n\nFor instance, given the sentence “Bob told his brother that he was starting to get on his nerves,” the attention mechanism should pay more attention to the word “Bob” when encoding the word “his” but “brother” when encoding the word “he”. 
As tokens pass through the series of encoders, their embeddings factor in more and more information about other tokens in the sequence, since they’re attending to other tokens that are also factoring in more and more information. By the time the tokens pass through the final encoder, the embedding of the CLS token ends up representing the sentence as a whole (hence the CLS token’s name). But its embedding is also very similar to those of all the other tokens in the sentence. That’s the redundancy we’re trying to remove.

The basic idea is that, in each of the network’s encoders, we preserve the embedding of the CLS token but select a representative subset — a core set — of the other tokens’ embeddings.

Embeddings are vectors, so they can be interpreted as points in a multidimensional space. To construct core sets, we would ideally sort embeddings into clusters of equal diameter and select the center point — the centroid — of each cluster.

![Ideal core-set construction by selecting cluster centroids.](https://dev-media.amazoncloud.cn/b45047888b404f0782aa4f146a829d25_%E4%B8%8B%E8%BD%BD.jpg)

Ideally, for each encoder in the network, we would construct a representative subset of token embeddings (green dots) by selecting the centroids (red dots) of token clusters (circles). The centroids would then pass to the next layer of the network.

Unfortunately, the problem of constructing an optimal core set that spans a layer of a neural network is NP-hard, meaning that it’s impractically time-consuming.

As an alternative, our paper proposes a greedy algorithm that selects n members of the core set at a time. At each layer, we take the embedding of the CLS token, and then we find the n embeddings farthest from it in the representational space. We add those, along with the CLS embedding, to our core set. Then we find the n embeddings whose minimum distance from any of the points already in our core set is greatest, and we add those to the core set.

We repeat this process until our core set reaches the desired size. This is provably an adequate approximation of the optimal core set.
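The following is a minimal sketch of that greedy selection, assuming Euclidean distance between embeddings; the function name, the batch parameter n, and the NumPy layout are illustrative choices, not the paper’s implementation:

```python
import numpy as np

def greedy_core_set(embeddings, target_size, n=1):
    """Greedily select a core set of token embeddings, always keeping token 0 (CLS).

    embeddings: (num_tokens, dim) array from one encoder layer.
    target_size: number of token embeddings to pass to the next layer.
    n: how many new points to add per greedy step.
    Returns the sorted indices of the selected tokens.
    """
    selected = [0]                                         # CLS is always kept
    # Distance from every token to its nearest already-selected token.
    min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    min_dist[0] = -np.inf                                  # never re-pick CLS

    while len(selected) < target_size:
        batch = min(n, target_size - len(selected))
        # Tokens whose minimum distance to the current core set is largest.
        farthest = np.argsort(-min_dist)[:batch]
        selected.extend(int(i) for i in farthest)
        min_dist[farthest] = -np.inf                       # exclude from future picks
        for i in farthest:
            # A new core-set point can only shrink other tokens' distances to the set.
            min_dist = np.minimum(
                min_dist, np.linalg.norm(embeddings - embeddings[i], axis=1))

    return sorted(selected)

# Example: keep 32 of 128 token embeddings at some intermediate layer.
layer_embeddings = np.random.randn(128, 768)
kept = greedy_core_set(layer_embeddings, target_size=32, n=4)
```

Choosing the farthest points first means each new addition covers the region of the embedding space that the current core set represents worst, which is what makes the greedy construction a reasonable stand-in for the intractable optimal one.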
Finally, in our paper, we consider the question of how large the core set of each layer should be. We use an exponential-decay function to determine the degree of attenuation from one layer to the next, and we investigate the trade-offs between accuracy and speedups or memory reduction that result from selecting different rates of decay.
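As a purely illustrative example of what such a schedule could look like (the decay rate and rounding rule here are assumptions, not values from the paper), the number of tokens retained after each encoder layer might shrink geometrically from the full input length:

```python
import math

def core_set_sizes(seq_len, num_layers, decay_rate=0.7):
    """Hypothetical exponential-decay schedule for per-layer core-set sizes.

    Layer 0 keeps the full sequence; each later layer keeps roughly a
    decay_rate fraction of the previous layer's tokens, never fewer than one
    (the CLS token). The rate 0.7 is an arbitrary illustration, not a value
    from the paper.
    """
    return [max(1, math.ceil(seq_len * decay_rate ** layer))
            for layer in range(num_layers)]

# A 12-layer encoder over a 128-token input, shedding roughly 30% of its
# tokens at each layer:
print(core_set_sizes(128, 12))
# [128, 90, 63, 44, 31, 22, 16, 11, 8, 6, 4, 3]
```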
**Acknowledgements**: [Ashish Khetan](https://www.amazon.science/author/ashish-khetan), Rene Bidart, [Zohar Karnin](https://www.amazon.science/author/zohar-karnin)

ABOUT THE AUTHOR

#### **[Xin Huang](https://www.amazon.science/author/xin-huang)**

Xin Huang is an applied scientist with Amazon Web Services.