A version of the BERT language model that’s 20 times as fast

{"value":"In natural-language understanding (++[NLU](https://www.amazon.science/tag/nlu)++), the Transformer-based BERT language model is king. Its high performance on multiple tasks has strongly influenced contemporary NLU research. \n\nOn the other hand, it is a relatively big and slow model, which makes it unsuitable for some applications. Multiple efforts have been made to compress the BERT architecture, but the choice of architectural parameters (the number of layers, the number of processing nodes per layer, and so on) has been somewhat arbitrary, and the resulting models are rarely much better than the original at optimizing the balance between the model’s size, speed, and error rate.\n\nA few weeks ago, we released part of the code for ++[Bort](https://github.com/alexa/bort)++, a highly optimized language model (LM) extracted from the BERT architecture through a combination of two ++[rigorous algorithmic techniques](https://arxiv.org/pdf/2010.10499.pdf)++ especially designed for neural-network compression.\n\nWe tested Bort on 23 NLU tasks, and on 20 of them, it improved on BERT’s performance — by 31% in one case — even though it’s 16% the size and about 20 times as fast.\n\nThe code doesn’t contain the full versions of the algorithms we used to extract Bort from BERT, but I will discuss them here briefly.\n\n\n#### **Principled approach**\n\n\nA solid understanding of the intrinsic difficulty of a problem allows us to design efficient and correct algorithms for solving that problem that can be trusted to perform consistently. Think of trying to sort an array of numbers by just trying out random permutations: this approach will quickly become intractable as the size of the input grows. Thankfully, sorting is a well-studied problem with fast and correct solutions and a nicely bounded runtime growth. Why can’t compressing BERT, or any network, be the same way? \n\nWe wanted to produce a version of BERT whose architectural parameters minimized its parameter size, inference speed, and error rate, and I wanted to show that these architectural parameters were optimal. I call this problem “optimal subarchitecture extraction”, or OSE. Note that this is a different compression approach than weight pruning, which heuristically removes connections from a previously trained network. \n\nAn algorithm solving OSE should also work for any input, not just BERT; a general-purpose algorithm would save a lot of time — and GPU cycles — otherwise spent on trial and error approaches.\n\n![image.png](https://dev-media.amazoncloud.cn/fe1172cd62cd48c681543512f459d4e5_image.png)\n\nComparison of weight pruning (*left, bottom*) and optimal subarchitecture extraction (*or OSE, bottom right*) for a toy source network (*top*). Weight pruning removes edges from a (usually trained) network, while OSE reparametrizes the layers. For example, the source network has two fully connected linear layers of dimensions 4x3 and 4x4 (*second and third from the left*), making its architectural parameters (2, 4, 3, 4, 4). The optimal subarchitecture ended up with two layers of dimensions 3x3 and 3x2, so the architectural parameters are (2, 3, 3, 3, 2). The parameter size was brought down from 2(4*3 + 4 + 4*4 + 4) = 72 to 2(3*3 + 3 + 3*2 + 2) = 40. \n\nCREDIT: GLYNIS CONDON\n\nOSE is a computationally hard problem, so the best we can hope for is an algorithm that returns something in the ballpark of the optimum. But even a ballpark algorithm could be impractically time consuming with a large enough input. 
My research showed that there is an efficient algorithm for extracting a subarchitecture whose functions — parameter size, inference speed, and error rate — have a polynomial correlation with the architectural parameters, meaning that they’re bounded by some polynomial function of the architectural parameters.

This is easy to see for the parameter size of the network, since it boils down to a counting argument based on the number of nodes per layer, times the number of layers (*see the image caption above*). The same argument can be made for the inference speed — the rate at which a network outputs a result given an input — as it is related to the parameter size.

Under some circumstances, the algorithm’s error rate has a similar correlation. Furthermore, whenever the “cost” associated with the first (call it *A*) and last (*C*) layers of the network is lower than that of the middle layers (*B*), the runtime of a solution will not explode. I call all these assumptions the *AB^n^C property*, which BERT turns out to have.

For inputs having the AB^n^C property, the algorithm I designed behaves like a fully polynomial-time approximation scheme, or FPTAS. Being an FPTAS means that an approximation parameter — which defines how far short of the optimal solution you’re willing to fall — can be given as an input to the algorithm, and the algorithm’s execution time depends polynomially on that parameter.

The more accurate you want the approximation to be, the longer the algorithm takes to execute. But at least the trade-off can be precisely specified, and the runtime of the algorithm is guaranteed not to grow too quickly.

I ran the FPTAS on BERT and obtained a set of architectural parameters (Bort), and from the proofs we knew that it would be Pareto optimal. That is, it optimized the balance between inference speed, parameter size, and error rate: a faster model would necessarily be bigger or more error prone; a more accurate model would be larger or slower. Any model that broke that balance would necessarily be suboptimal.

#### **Generalizability**

The true strength of BERT lies in its generalizability across tasks. BERT learns to represent words of a language as points in a shared space, where proximity in the space implies semantic similarity. That general model can then be fine-tuned on specific tasks, such as [question answering](https://www.amazon.science/blog/on-benchmark-data-set-question-answering-system-halves-error-rate) or [text classification](https://www.amazon.science/blog/natural-language-processing-techniques-text-classification-with-transformers-at-scale).

To see whether Bort generalizes as well as BERT, we needed to pretrain it and then test it on several different natural-language-understanding tasks. The FPTAS doesn’t return a trained model, and it didn’t know that our ultimate goal was fine-tuning.

When fine-tuning, I opted to follow BERT’s approach and attach a single linear classifier to Bort. I supposed that a good LM would have no problem with this setup, but the fine-tuning process was hard, and several variations on this strategy (e.g., knowledge distillation, deeper classifiers, and hyperparameter search) failed to yield a model that reached the theoretical limits predicted by the FPTAS.
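
As a rough illustration of that setup, the sketch below attaches a single linear classifier to a generic pretrained encoder in PyTorch. The encoder interface, the pooled [CLS] convention, and the hyperparameter values are assumptions made for illustration; this is not the released Bort fine-tuning code.

```python
# Minimal sketch of "pretrained encoder + single linear classifier" fine-tuning.
# The encoder is a placeholder module assumed to return per-token hidden states
# of shape (batch, sequence_length, hidden_size).
import torch.nn as nn

class SingleLayerClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.encoder = encoder                                 # pretrained LM body (placeholder)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)   # the single linear classifier

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids, attention_mask)
        pooled = hidden_states[:, 0]                           # first ([CLS]) token representation
        return self.classifier(self.dropout(pooled))           # (batch, num_labels) logits

# Usage sketch: fine-tune end to end with a standard cross-entropy loss.
# model = SingleLayerClassifier(pretrained_encoder, hidden_size=768, num_labels=2)
# loss = nn.CrossEntropyLoss()(model(input_ids, attention_mask), labels)
```
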
In machine learning, it is crucial to have a good selection of features to make the data representative of the task, which in turn makes the task learnable. Deep networks operate in a hierarchical fashion, which makes them fantastic at selecting features automatically. If the model is too small and the data too scarce, however, the task may not be learnable. And Bort is, by design, a small model.

#### **The Agora algorithm**

To address this problem, I developed a second algorithm, called Agora, which leverages the development set for the task Bort is being fine-tuned on. In machine learning, a labeled data set is usually split into three components: the training set is used to train a model; the development (or dev) set is used to check for over- or underfitting; and the test set is used to assess the model’s generalizability.

With Agora, Bort is first fine-tuned on the training set, then applied to the data in the dev set. Agora finds the data points in the dev set that the Bort model labeled *incorrectly* and samples new points near them in a chosen representation space. Those samples are labeled by a second model and then added to the training set.
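
Described procedurally, one round of Agora looks roughly like the loop below. This is a schematic sketch assuming simple list-based data sets; the fine-tuning, prediction, and sampling helpers are abstract placeholders, since the choice of representation space, sampling scheme, and labeling model is exactly what the sketch leaves out.

```python
# Schematic sketch of the Agora loop described above; not the released implementation.
from typing import Any, Callable, List, Tuple

def agora(model: Any,
          train_set: List[Tuple[Any, Any]],      # (example, label) pairs
          dev_set: List[Tuple[Any, Any]],
          fine_tune: Callable,                   # fine_tune(model, data) -> model (placeholder)
          predict: Callable,                     # predict(model, x) -> label (placeholder)
          sample_near: Callable,                 # new point near x in a chosen space (placeholder)
          second_model: Any,                     # the second, labeling model (placeholder)
          rounds: int = 5) -> Any:
    for _ in range(rounds):
        model = fine_tune(model, train_set)                 # 1. fine-tune on the current training set
        mistakes = [x for x, y in dev_set
                    if predict(model, x) != y]              # 2. dev points labeled incorrectly
        new_points = [sample_near(x) for x in mistakes]     # 3. sample near the mistakes
        labeled = [(x, predict(second_model, x))            # 4. label the samples with the second model
                   for x in new_points]
        train_set = train_set + labeled                     # 5. grow the training set and repeat
    return model
```
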
It might sound silly to add randomized points from the dev set to the training set. You might expect the fine-tuned model to overfit the dev set data, so that it doesn’t generalize well to unfamiliar inputs — which would mean that it didn’t actually learn.

But this is a powerful approach to situations with scarce or ill-formed data. The proof of this is non-trivial, but intuitively, the algorithm works because it “reconstructs” the input in a way that is equivalent, in a precise sense, to the distribution of the task we are modeling in the first place. I believe the best part about Agora is not its effectiveness but that it is another proof of what the great Patrick Winston [showed fifty years ago](http://dspace.mit.edu/bitstream/handle/1721.1/6884/AITR-231.pdf): you can’t learn something that you do not already know almost completely.

![image.png](https://dev-media.amazoncloud.cn/5f0141f9ec364d88a41ba4d9fb502f6d_image.png)

In this toy example, we are attempting to learn a data set that looks like a torus. However, the training set is not representative of the distribution. Agora samples from the development set, generates new points, labels them, and adds them back to the training set. Given enough rounds, it is able to generate a data set learnable by any model — even a random guesser!

CREDIT: GLYNIS CONDON

The application of both algorithms to BERT yielded Bort, which has an effective size (not counting the embedding layer) of 5.5% of the original BERT architecture and a net size of 16%. The effective size is the more important figure because the first layer, by the AB^n^C property, is not as expensive as the other layers when performing inference. Indeed, Bort is up to 20 times faster, on a CPU, than BERT. It can also be pretrained much more rapidly than usual, likely because the FPTAS prefers faster-converging architectures. And it obtained improvements of up to 31%, absolute, with respect to BERT, across multiple NLU benchmarks.

**Acknowledgments:** *Daniel J. Perry, my coauthor on the [Bort paper](https://arxiv.org/pdf/2010.10499.pdf).*

ABOUT THE AUTHOR

#### **[Adrian de Wynter](https://www.amazon.science/author/adrian-de-wynter)**

Adrian de Wynter is an applied scientist with Alexa AI.