Improving forecasting by learning quantile functions

Learning the complete quantile function, which maps probabilities to variable values, rather than building separate models for each quantile level, enables better optimization of resource trade-offs.

The quantile function is a mathematical function that takes a quantile level (a probability between 0 and 1) as input and outputs the corresponding value of a variable. It can answer questions like, “If I want to guarantee that 95% of my customers receive their orders within 24 hours, how much inventory do I need to keep on hand?” As such, the quantile function is commonly used in forecasting.

In practical cases, however, we rarely have a tidy formula for computing the quantile function. Instead, statisticians usually use regression analysis to approximate it for a single quantile level at a time. That means that if you want to compute it for a different quantile level, you have to build a new regression model — which, today, often means retraining a neural network.

In a pair of papers we’re presenting at this year’s International Conference on Artificial Intelligence and Statistics (AISTATS), we describe an approach to learning an approximation of the entire quantile function at once, rather than approximating it separately for each quantile level.

This means that users can query the function at different points to optimize the trade-offs between performance criteria. For instance, it could be that lowering the guarantee of 24-hour delivery from 95% to 94% enables a much larger reduction in inventory, which might be a trade-off worth making. Or, conversely, it could be that raising the guarantee threshold — and thus increasing customer satisfaction — requires very little additional inventory.

Our approach is agnostic as to the shape of the distribution underlying the quantile function. The distribution could be Gaussian (the bell curve, or normal distribution); it could be uniform; or it could be anything else. Not locking ourselves into any assumptions about distribution shape allows our approach to follow the data wherever it leads, which increases the accuracy of our approximations.

In the first of our AISTATS papers, we present an approach to learning the quantile [function in the univariate case](https://www.amazon.science/publications/learning-quantile-functions-without-quantile-crossing-for-distribution-free-time-series-forecasting), where there’s a one-to-one correspondence between probabilities and variable values. In the second paper, [we consider the multivariate case](https://www.amazon.science/publications/multivariate-quantile-function-forecaster).

#### **The quantile function**

Any probability distribution — say, the distribution of heights in a population — can be represented as a function called the probability density function (PDF). The input to the function is a value of the variable (a particular height), and the output is a non-negative number representing the probability density at that input (roughly, the fraction of people in the population who have that height).

A useful related function is the cumulative distribution function (CDF), which gives the probability that the variable will take a value at or below a particular value — for instance, the fraction of the population that is 5’6” or shorter. The CDF’s values are between 0 (no one is shorter than 0’0”) and 1 (100% of the population is shorter than 500’0”).

Technically, the CDF is the integral of the PDF, so it computes the area under the probability curve up to the target point. At low input values, the output of the CDF can be lower than that of the PDF. But because the CDF is cumulative, it is monotonically non-decreasing: the higher the input value, the higher the output value.

If the CDF exists, the quantile function is simply its inverse. The quantile function’s graph can be produced by flipping the CDF graph over — that is, rotating it 180 degrees around a diagonal axis that extends from the lower left to the upper right of the graph.

![下载.jpg](https://dev-media.amazoncloud.cn/9ad2ddb3696e41aab7cf94b28305afe2_%E4%B8%8B%E8%BD%BD.jpg)

The graph of a probability density function (blue line) and its associated cumulative distribution function (orange line).

![下载.gif](https://dev-media.amazoncloud.cn/27ef4623f3b24706b6dfe3a16df1b4be_%E4%B8%8B%E8%BD%BD.gif)

The quantile function is simply the inverse of the cumulative distribution function (if it exists). Its graph can be produced by flipping the cumulative distribution function's graph over.

Like the CDF, the quantile function is monotonically non-decreasing. That’s the fundamental observation on which our method rests.
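To make these relationships concrete, here is a minimal sketch using SciPy's normal distribution as a stand-in demand model; the distribution, its parameters, and the numbers are made up for illustration and are not taken from the papers.

```python
# Minimal sketch of the PDF / CDF / quantile-function relationship, using a
# normal distribution as a stand-in demand model (parameters are made up).
from scipy import stats

demand = stats.norm(loc=500, scale=80)  # hypothetical daily demand for one product

x = 550.0
density = demand.pdf(x)      # PDF: probability density at x
prob = demand.cdf(x)         # CDF: P(demand <= x), the area under the PDF up to x
stock_95 = demand.ppf(0.95)  # quantile function (inverse CDF): the stock level
                             # that covers demand in 95% of cases

print(f"PDF(550) = {density:.4f}, CDF(550) = {prob:.3f}, 95% quantile = {stock_95:.1f}")
print(demand.cdf(demand.ppf(0.95)))  # ~0.95: the quantile function inverts the CDF
```

In real forecasting problems the distribution is not available in closed form like this; learning a usable stand-in for `ppf` directly from data is exactly what the methods below do.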
#### **The univariate case**

One of the drawbacks of the conventional approach to approximating the quantile function — estimating it only at specific points — is that it can lead to quantile crossing. That is, because each prediction is based on a different model, trained on different local data, the predicted variable value for a given probability could be lower than the value predicted for a lower probability. This violates the requirement that the quantile function be monotonically non-decreasing.

To avoid quantile crossing, our method learns a predictive model for several different input values — quantiles — at once, spaced at regular intervals between 0 and 1. The model is a neural network designed so that the prediction for each successive quantile is an incremental increase over the prediction for the preceding quantile.

Once our model has learned estimates for several anchor points that enforce the monotonicity of the quantile function, we can estimate the function through simple linear interpolation between the anchor points (called “knots” in the literature), with nonlinear extrapolation to handle the tails of the function.

Where training data is plentiful enough to enable a denser concentration of anchor points (knots), linear interpolation provides a more accurate approximation.
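A minimal sketch of that idea follows: the network predicts the value at the lowest knot plus non-negative increments, so the knot values can never decrease as the quantile level grows. It is only an illustration of the incremental, monotonicity-preserving parameterization described above, not the exact architecture from the paper; the feature size, knot count, and knot levels are arbitrary choices.

```python
# Illustrative monotone quantile head (not the paper's exact architecture):
# the value at the first knot plus non-negative increments for later knots.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneQuantileHead(nn.Module):
    def __init__(self, in_features: int, num_knots: int = 10):
        super().__init__()
        self.base = nn.Linear(in_features, 1)                # value at the first knot
        self.deltas = nn.Linear(in_features, num_knots - 1)  # raw, unconstrained increments
        # knot locations spaced at regular intervals in (0, 1)
        self.register_buffer("levels", torch.linspace(0.05, 0.95, num_knots))

    def forward(self, x):
        increments = F.softplus(self.deltas(x))  # softplus forces increments >= 0
        knots = torch.cumsum(torch.cat([self.base(x), increments], dim=-1), dim=-1)
        return knots  # shape (batch, num_knots), non-decreasing along the knot axis

head = MonotoneQuantileHead(in_features=8)
features = torch.randn(4, 8)        # stand-in for the output of some encoder network
knot_values = head(features)

# Query an arbitrary quantile level by linear interpolation between the knots.
q80 = np.interp(0.8, head.levels.numpy(), knot_values[0].detach().numpy())
```

A head like this is typically trained by minimizing the pinball (quantile) loss at each knot level; because its outputs are ordered by construction, quantile crossing cannot occur.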
To test our method, we applied it to a toy distribution with three arbitrary peaks, to demonstrate that we don’t need to make any assumptions about distribution shape.

![下载.jpg](https://dev-media.amazoncloud.cn/41230561f3fc48e2a6ef00ad02d03377_%E4%B8%8B%E8%BD%BD.jpg)

The architecture of our quantile function estimator (the incremental quantile function, or IQF), which enforces the monotonicity of the quantile function by representing the value of each quantile as an incremental increase in the value of the previous quantile.

![下载.jpg](https://dev-media.amazoncloud.cn/3a2df33907424b28b69ebcffcdec987b_%E4%B8%8B%E8%BD%BD.jpg)

An approximation of the quantile function that (mostly) uses linear interpolation.

![下载.jpg](https://dev-media.amazoncloud.cn/05ba720b298b42f49b07048708cf95dd_%E4%B8%8B%E8%BD%BD.jpg)

An approximation of the quantile function with 20 knots (anchor points).

![下载.jpg](https://dev-media.amazoncloud.cn/a178574209534feab97e71c91e0f7daa_%E4%B8%8B%E8%BD%BD.jpg)

The true distribution (red, left), with three arbitrary peaks; our method's approximation, using five knots (center); and our method's approximation, using 20 knots (right).

#### **The multivariate case**

So far, we’ve been considering the case in which our distribution applies to a single variable. But in many practical forecasting use cases, we want to consider multivariate distributions.

For instance, if a particular product uses a rare battery that doesn’t come included, a forecast of the demand for that battery will probably be correlated with the forecast of the demand for the product itself.

Similarly, if we want to predict demand over several different time horizons, we would expect there to be some correlation between consecutive predictions: demand shouldn’t undulate too wildly. A multivariate probability distribution over time horizons should capture that correlation better than a separate univariate prediction for each horizon.

The problem is that the notion of a multivariate quantile function is not well defined. If the CDF maps multiple variables to a single probability, which values do you map to when you perform that mapping in reverse?

This is the problem we address in our second AISTATS paper. Again, the core observation is that the quantile function must be monotonically non-decreasing. So we define the multivariate quantile function as the derivative (gradient) of a convex function.

A convex function is one that tends everywhere toward a single global minimum: in two dimensions, it looks like a U-shaped curve. The derivative of a function computes the slope of its graph: again in the two-dimensional case, the slope of a convex function is negative but flattening as it approaches the global minimum, zero at the minimum, and increasingly positive on the other side. Hence, the derivative is monotonically increasing.

This two-dimensional picture generalizes readily to higher dimensions. In our paper, we describe a method for training a neural network to learn a quantile function that is the derivative of a convex function. The architecture of the network enforces convexity, and, essentially, the model learns the convex function using its derivative as a training signal.
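As a rough illustration of that construction, the sketch below builds a small scalar-valued network that is convex in its input by using non-negative weights on the hidden-to-hidden connections and convex, non-decreasing activations (in the spirit of input-convex neural networks), then reads off its derivative with automatic differentiation. It is a simplified stand-in for the idea, not the architecture used in the paper.

```python
# Sketch of a scalar network that is convex in its input: non-negative weights on
# hidden-to-hidden connections plus convex, non-decreasing activations keep the
# output convex, so its gradient is a monotone map. (Illustrative stand-in only.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexScalarNet(nn.Module):
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.in_layer = nn.Linear(dim, hidden)                  # first layer: unconstrained
        self.skip = nn.Linear(dim, hidden)                      # skip connection from the input
        self.hidden_raw = nn.Parameter(torch.randn(hidden, hidden) * 0.1)
        self.out_raw = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, u):
        z = F.softplus(self.in_layer(u))                        # convex in u
        w_hidden = F.softplus(self.hidden_raw)                  # non-negative weights
        z = F.softplus(F.linear(z, w_hidden) + self.skip(u))    # still convex in u
        w_out = F.softplus(self.out_raw)                        # non-negative weights
        return F.linear(z, w_out).squeeze(-1)                   # scalar output, convex in u

net = ConvexScalarNet(dim=3)
u = torch.rand(5, 3, requires_grad=True)                        # probability-like inputs in (0, 1)
potential = net(u).sum()
quantile_map = torch.autograd.grad(potential, u)[0]             # derivative of the convex function
```

Because the non-negativity of the weights is enforced structurally (here by passing raw parameters through a softplus), the derivative map is monotone for any parameter values, not only after successful training.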
In addition to real-world datasets, we test our approach on the problem of simultaneous prediction across multiple time horizons, using a dataset that follows a multivariate Gaussian distribution. Our experiments show that, indeed, our approach captures the correlations between successive time horizons better than a univariate approach does.

![下载.jpg](https://dev-media.amazoncloud.cn/842e89a1ddbf477799709ace775849c1_%E4%B8%8B%E8%BD%BD.jpg)

A convex function (blue) and its monotonically increasing derivative (green).

![下载.jpg](https://dev-media.amazoncloud.cn/56be6cd50e884659a4941e5c263ed418_%E4%B8%8B%E8%BD%BD.jpg)

Three self-correlation graphs that map a time series against itself. At left is the ground truth. In the center is the forecast produced by a standard univariate quantile function, in which each time step correlates only with itself. At right is the forecast produced using our method, which better captures correlations between successive time steps.
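The effect those panels illustrate can be reproduced with a toy simulation. The sketch below uses an arbitrary AR(1)-style covariance (all numbers are made up): sampling each horizon independently from its marginal reproduces the per-horizon quantiles but loses the correlation between consecutive steps, which is what a joint, multivariate forecast preserves.

```python
# Toy illustration of why a joint multivariate forecast matters: sampling each
# horizon independently from its marginal preserves the marginal quantiles but
# destroys the correlation between consecutive time steps. (Arbitrary toy numbers.)
import numpy as np

rng = np.random.default_rng(0)
horizons, rho, n = 8, 0.8, 50_000
# AR(1)-style covariance: corr(t, s) = rho ** |t - s|
cov = rho ** np.abs(np.subtract.outer(np.arange(horizons), np.arange(horizons)))

joint = rng.multivariate_normal(np.zeros(horizons), cov, size=n)   # correlated forecast
independent = rng.standard_normal((n, horizons))                   # per-horizon marginals only

for name, sample in [("joint", joint), ("independent", independent)]:
    lag1 = np.corrcoef(sample[:, :-1].ravel(), sample[:, 1:].ravel())[0, 1]
    q90 = np.quantile(sample[:, 0], 0.9)
    print(f"{name:11s} lag-1 correlation = {lag1:+.2f}, per-horizon 90% quantile = {q90:.2f}")
# Both match the marginal quantile (~1.28), but only the joint sample keeps the
# correlation (~0.8) between successive horizons.
```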
This work continues a line of research at Amazon combining quantile regression and deep learning to solve forecasting problems at a massive scale. In particular, it builds upon the [MQ-CNN model proposed by a group of Amazon scientists in 2017](https://www.amazon.science/publications/a-multi-horizon-quantile-recurrent-forecaster), extensions of which are currently [powering Amazon’s demand forecasting system](https://www.amazon.science/latest-news/the-history-of-amazons-forecasting-algorithm). The current work is also closely related to [spline quantile function RNNs](https://www.amazon.science/publications/probabilistic-forecasting-with-spline-quantile-function-rnns), which — like the multivariate quantile forecaster — [started as an internship project](https://www.amazon.science/this-amazon-intern-published-a-paper-that-extends-the-usability-of-amazon-sagemaker-deepar-in-a-profound-way).

Code for all these methods is available in the open-source [GluonTS probabilistic time series modeling library](https://github.com/awslabs/gluon-ts).

#### **Acknowledgements**

This work would not have been possible without the help of our awesome co-authors, whom we would like to thank for their contributions to these two papers: Kelvin Kan, [Danielle Maddix](https://www.amazon.science/author/danielle-robinson), Tim Januschowski, [Konstantinos Benidis](https://www.amazon.science/author/konstantinos-benidis), Lars Ruthotto, [Yuyang Wang](https://www.amazon.science/author/yuyang-wang), and [Jan Gasthaus](https://www.amazon.science/author/jan-gasthaus).

ABOUT THE AUTHOR

#### **[Youngsuk Park](https://www.amazon.science/author/youngsuk-park)**

Youngsuk Park is an applied scientist with Amazon Web Services AI Labs.

#### **[François-Xavier Aubet](https://www.amazon.science/author/francois-xavier-aubet)**

François-Xavier Aubet is an applied-machine-learning scientist in Amazon’s Supply Chain Optimization Technologies group.