Amazon wins best-paper award at first AutoML conference

{"value":"At the first annual Conference on Automated Machine Learning (AutoML), my colleagues and I won a best-paper award for a new way to decide when to terminate Bayesian optimization, a widely used hyperparameter optimization method.\n\nHyperparameters configure machine learning models, crucially affecting their performance. The depth and number of decision trees in a decision tree model or the number and width of the layers in a neural network are examples of hyperparameters. Optimizing hyperparameters requires retraining the model multiple times with different hyperparameter configurations to identify the best one.\n\n![image.png](https://awsdevweb.s3.cn-north-1.amazonaws.com.cn/053e2057e90840148929759e80a03b15_image.png)\nAn example of Bayesian optimization, in which γ is a set of hyperparameter configurations and f-hat is an empirical estimate of the resulting model error. The gap between the green and orange lines is the estimate of the upper bound on the optimizer's regret, or the distance between the ideal hyperparameter configuration and the best configuration found by the optimization algorithm.\n\nUsually, canvassing all possible hyperparameter configurations is prohibitively time consuming, so hyperparameter optimization (HPO) algorithms are designed to search the configuration space as efficiently as possible. Still, at some point the search has to be called off, and researchers have proposed a range of stopping criteria. For instance, a naive approach consists in terminating HPO if the best found configuration remains unchanged for some subsequent evaluations.\n\nOur paper, “[Automatic termination for hyperparameter optimization](https://www.amazon.science/publications/automatic-termination-for-hyperparameter-optimization)”, suggests a new principled stopping criterion, which, in our experiments, offered a better trade-off between the time consumption of HPO and the accuracy of the resulting models. Our criterion acknowledges the fact that the generalization error, which is the true but unknown optimization objective, does not coincide with the empirical estimate optimized by HPO. As a consequence, optimizing the empirical estimate too strenuously may in fact be counterproductive.\n\n### Convergence criterion for hyperparameter optimization\nThe goal of a machine learning model is to produce good predictions for unseen data. This means that a good model will minimize some generalization error, f. Population risk, for instance, measures the expected distance between the model’s prediction for a given input and the ground truth.\n\nAn HPO algorithm is always operating on some kind of budget — a restriction on the number of configurations it can consider, the wall clock time, the margin of improvement over the best configuration found so far, or the like. The algorithm’s goal is to minimize the distance between the ideal configuration, γ*, and the best configuration it can find before the budget is spent, γt*. That distance is known as the regret, rt:\n\nrt= f(γt*) – f(γ*)\n\nThe regret quantifies the convergence of the HPO algorithm.\n\nThe quality of found configurations, however, is judged according to an empirical estimate of f, which we will denote by f-hat, or f with a circumflex accent over it (see figure, above). The empirical estimate is computed on the validation set, a subset of the model’s training data. 
Our paper, “[Automatic termination for hyperparameter optimization](https://www.amazon.science/publications/automatic-termination-for-hyperparameter-optimization)”, suggests a new, principled stopping criterion, which, in our experiments, offered a better trade-off between the time consumed by HPO and the accuracy of the resulting models. Our criterion acknowledges the fact that the generalization error, which is the true but unknown optimization objective, does not coincide with the empirical estimate optimized by HPO. As a consequence, optimizing the empirical estimate too strenuously may in fact be counterproductive.

### Convergence criterion for hyperparameter optimization
The goal of a machine learning model is to produce good predictions for unseen data. This means that a good model will minimize some generalization error, f. Population risk, for instance, measures the expected distance between the model’s prediction for a given input and the ground truth.

An HPO algorithm is always operating on some kind of budget — a restriction on the number of configurations it can consider, the wall clock time, the margin of improvement over the best configuration found so far, or the like. The algorithm’s goal is to minimize the distance between the ideal configuration, γ*, and the best configuration it can find before the budget is spent, γt*. That distance is known as the regret, rt:

rt = f(γt*) – f(γ*)

The regret quantifies the convergence of the HPO algorithm.

The quality of found configurations, however, is judged according to an empirical estimate of f, which we will denote by f-hat, or f with a circumflex accent over it (see figure, above). The empirical estimate is computed on the validation set, a subset of the model’s training data. If the validation set has a different distribution than the dataset as a whole, the empirical estimate is subject to statistical error relative to the true generalization error.

Our new stopping criterion is based on the observation that the accuracy of the evaluation of a particular hyperparameter configuration depends on the statistical error of the empirical estimate, f-hat. If the statistical error is greater than the regret, there’s no point in trying to optimize the configuration further. You could still improve performance on the validation set, but given the distributional mismatch, you might actually be hurting performance on the dataset as a whole.
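To make the relationship between the regret and the statistical error concrete, here is a toy numeric sketch of my own (not an experiment from the paper): when the empirical estimate f-hat is noisy, the configuration that minimizes the validation error can differ from the one that minimizes the true generalization error, and the regret measures the cost of that mismatch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: true (unknown) generalization error of 50 hypothetical
# configurations, plus a noisy empirical estimate f-hat of each.
true_error = rng.uniform(0.20, 0.40, size=50)      # f(γ) for each configuration
noise = rng.normal(0.0, 0.03, size=50)             # statistical error of f-hat
empirical_error = true_error + noise               # f-hat(γ)

best_by_estimate = empirical_error.argmin()        # what HPO would select
best_overall = true_error.argmin()                 # the ideal configuration γ*

# Regret incurred by trusting the noisy empirical estimate.
regret = true_error[best_by_estimate] - true_error[best_overall]
print(f"regret of the configuration selected by f-hat: {regret:.4f}")
```

Once the remaining regret is smaller than the noise level of f-hat, further search mostly chases that noise rather than improving the true generalization error.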
The obvious difficulty with this approach is that we know neither the true regret — because we don’t know the model’s performance on the ideal hyperparameter configuration — nor the statistical error, because we don’t know how the validation set’s distribution differs from the full dataset’s.

### Bounding the regret and estimating the statistical error
The heart of our paper deals with establishing a stopping criterion when we know neither the regret nor the statistical error. Our work applies to Bayesian optimization (BO), an HPO method that is sample-efficient, meaning that it requires a relatively small number of hyperparameter evaluations.

First, we prove upper and lower bounds on the regret, based on the assumption that the output values of the function relating hyperparameter configuration to performance follow a normal (Gaussian) distribution. This is, in fact, a standard assumption in HPO.

Then we estimate the statistical error of the empirical estimate, based on the statistical variance we observe during cross-validation. Cross-validation is a process in which the dataset is partitioned into a fixed number of equal subsets, and each subset serves in turn as the validation set, with the remaining subsets serving as training data. Cross-validation, too, is a common procedure in HPO.

Our stopping criterion, then, is to terminate when the statistical error exceeds the distance between the upper and lower bounds on the regret.
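The sketch below shows the general shape of such a check, under assumptions of my own rather than the paper's exact bounds: the remaining regret is bounded by the gap between the lowest upper confidence bound and the lowest lower confidence bound of the BO surrogate model, and the statistical error is estimated from the spread of cross-validation fold errors for the incumbent configuration. The confidence width `beta` and the standard-error estimate are illustrative choices, not the quantities derived in the paper.

```python
import numpy as np

def should_stop(mu, sigma, cv_fold_errors, beta=2.0):
    """Schematic termination check for Bayesian optimization.

    mu, sigma:       posterior mean and standard deviation of the surrogate
                     model's predicted validation error at a set of candidate
                     configurations (equal-length 1-D arrays).
    cv_fold_errors:  per-fold validation errors of the current best configuration.
    beta:            confidence-band width; a fixed value is an assumption
                     made here for illustration.
    """
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    lcb = mu - beta * sigma                 # optimistic bound on achievable error
    ucb = mu + beta * sigma                 # pessimistic bound on achievable error
    regret_bound = ucb.min() - lcb.min()    # crude upper bound on the remaining regret

    # One simple proxy for the statistical error of f-hat: the standard error
    # of the cross-validation fold estimates (the paper derives its own estimate).
    folds = np.asarray(cv_fold_errors, dtype=float)
    statistical_error = folds.std(ddof=1) / np.sqrt(len(folds))

    return regret_bound < statistical_error

# Hypothetical usage with made-up surrogate predictions and fold errors.
print(should_stop(mu=[0.31, 0.29, 0.30], sigma=[0.01, 0.02, 0.015],
                  cv_fold_errors=[0.28, 0.33, 0.30, 0.29, 0.31]))
```

Returning `True` here corresponds to the criterion stated above: once the confidence-bound gap, an upper bound on what further search could still gain, falls below the noise level of the empirical estimate, continuing the search is unlikely to improve the true generalization error.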
We test our approach against five baselines on two different decision tree models — XGBoost and random forest — and on a deep neural network, using two different datasets. Results vary, but on average, our approach best optimizes the trade-off between model accuracy and time spent on HPO.

The [paper](https://www.amazon.science/publications/automatic-termination-for-hyperparameter-optimization) provides the technical details and the experiments we conducted to validate the termination criterion.

ABOUT THE AUTHOR
#### **[Cedric Archambeau](https://www.amazon.science/author/cedric-archambeau)**
Cédric Archambeau is a principal applied scientist with Amazon Web Services.