How to compute the optimal way to package Amazon products

{"value":"Amazon has a range of ways to package product shipments: bags, padded mailers, T-folder boxes (the classic Amazon book box), carton boxes, and so on. The best package for a given product will optimize the trade-off between shipping costs (more elaborate packaging costs more) and the costs associated with the return of damaged products.\n\nAt this year’s European Conference on Machine Learning, my colleagues and I [presented](https://www.amazon.science/publications/think-out-of-the-package-recommending-package-types-for-e-commerce-shipments) a new model for determining the best way to package a given product. The model has been applied to hundreds of thousands of Amazon packages, reducing shipment damage by 24% while actually cutting shipping costs by 5%.\n\nThe problem we address has certain structural features that make standard machine learning methods impractical.\n\nThe first is a lack of ground-truth data. If we had abundant examples of how the same product fared — what the damage rates were — with each of the eight package types we consider, we could train a standard, supervised machine learning model to decide a package type on the basis of product features.\n\nBut we don’t have those examples: most products are shipped in only one or two package types, and damage is rare.\n\nThe other feature is that we want our model to output probabilities that respect the ordinality of the package types. That is, we’d like the model to predict higher probabilities of damage for less expensive (less robust) packaging options and lower probabilities of damage for more-expensive (more-robust) options. Enforcing ordinality is not something that standard machine learning techniques do naturally.\n\n![image.png](https://dev-media.amazoncloud.cn/7aa34ee8f2864520b1bf4fdead99520b_image.png)\n\nThe researchers' new model respects the ordinality of package types, predicting higher probabilities of damage for less expensive (less robust) packaging options and lower probabilities of damage for more-expensive (more-robust) options.\n\nCREDIT: GLYNIS CONDON\n\nSo instead, we use a simple linear model, with carefully designed constraints on the model parameters to impose ordinality. The model performs a set of arithmetic operations on vectors representing product features, yielding a score that indicates probability of damage for each combination of product and package type. The product features include things like product title, category, subcategory, dimensions, weight, the difference between the package volume and the product volume, and whether or not the product is fragile or liquid or involves hazardous materials.\n\nTo further enforce ordinality, we use what in machine learning parlance is called data augmentation. For every example of a product-package pair that resulted in product damage, we add examples that pair the same product with each of the less-robust packaging options, also labeled as resulting in damage. 
To further enforce ordinality, we use what in machine learning parlance is called data augmentation. For every example of a product-package pair that resulted in product damage, we add examples that pair the same product with each of the less robust packaging options, also labeled as resulting in damage.

Similarly, for each example of a product-package pair that was successfully delivered, we add examples that pair the same product with each of the more robust packaging options, also labeled as having resulted in successful delivery.
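In code, the augmentation step might look like the sketch below; the record layout and package ordering are assumptions for illustration, not the production pipeline.

```python
# Package types ordered from least to most robust (illustrative ordering).
ROBUSTNESS = ["bag", "padded_mailer", "t_folder_box", "carton_box"]
RANK = {p: i for i, p in enumerate(ROBUSTNESS)}

def augment(examples):
    """Expand (product_id, package_type, damaged) records as described above.

    A damaged shipment is copied to every less robust package type (which
    would presumably also have failed); a successful shipment is copied to
    every more robust type (which would presumably also have succeeded).
    """
    augmented = list(examples)
    for product, package, damaged in examples:
        rank = RANK[package]
        implied = ROBUSTNESS[:rank] if damaged else ROBUSTNESS[rank + 1:]
        augmented.extend((product, other, damaged) for other in implied)
    return augmented

observed = [("B00EXAMPLE1", "t_folder_box", True), ("B00EXAMPLE2", "bag", False)]
for record in augment(observed):
    print(record)
```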
Not only does using a linear model make it easier to enforce ordinality, but it also makes model building much more efficient — a crucial concern, given that we’re fitting our model to 100 million different product-package pairs.

#### **Formulating the problem**

Our goal is to find a function that maps product features to package types in a way that minimizes the sum of shipping costs and damage-related costs for each product. (Using features rather than product identifiers as inputs to the function ensures that our model will also apply to products added after the model is trained.) At the same time, the function needs to keep the cumulative damage cost across products below some predetermined threshold.

Unfortunately, this formulation of the problem is NP-complete, meaning that it’s computationally intractable in most real-world scenarios.

We prove, however, that given a set of realistic assumptions, this formulation is equivalent to a simpler optimization problem that minimizes a weighted sum of the total cost of packaging and the total damage cost.

Proving equivalence requires that we find the right weighting parameter — the parameter by which the damage cost is multiplied in the weighted sum. (Weighting the damage cost, rather than the cost of packaging, makes sense, as there are fewer examples of damaged goods in the data set.) In the paper, we show that the weight can be computed efficiently using binary search.
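In symbols (the notation here is ours rather than the paper's), if a policy π assigns each product i a package type, c_ship and c_dmg denote the shipping and expected damage costs of that assignment, and D is the damage budget, the constrained problem is

$$
\min_{\pi}\;\sum_{i}\Bigl[c_{\mathrm{ship}}\bigl(i,\pi(i)\bigr)+c_{\mathrm{dmg}}\bigl(i,\pi(i)\bigr)\Bigr]
\quad\text{subject to}\quad
\sum_{i}c_{\mathrm{dmg}}\bigl(i,\pi(i)\bigr)\le D,
$$

and its weighted relaxation is

$$
\min_{\pi}\;\sum_{i}\Bigl[c_{\mathrm{ship}}\bigl(i,\pi(i)\bigr)+\lambda\,c_{\mathrm{dmg}}\bigl(i,\pi(i)\bigr)\Bigr],
$$

where λ is the weighting parameter found by binary search; with the cross-product constraint gone, the relaxed objective decomposes into an independent minimization per product.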
The search procedure is to start with some large weight — in our experiments, we started with 1,000 — and cut it in half. This defines the midpoint between the minimum value of the weight (0) and the maximum.

Next, solve the optimization problem (minimize the weighted sum) at that weight. If, using the resulting model, the damage cost is above the predetermined threshold of the exact optimization problem, reset the minimum weight to the current midpoint; if it’s below the threshold, reset the maximum weight to the current midpoint. Then repeat the operation.

At each iteration, this procedure halves the interval in which we search for the weight setting that will keep the damage cost just under the target threshold. The procedure ends when the damage cost is within some predetermined, short distance of the threshold.
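A compact sketch of that search loop, assuming hypothetical callables `solve_for_weight` (fits the model that minimizes the weighted sum at a given weight) and `damage_cost` (evaluates that model's total damage cost):

```python
def find_weight(solve_for_weight, damage_cost, threshold,
                w_max=1000.0, tol=1e-3, max_iters=50):
    """Binary-search the damage weight as described above (a sketch, not the
    paper's code). Each iteration halves the interval [w_min, w_max] and
    re-fits the model at the midpoint weight.
    """
    w_min = 0.0
    for _ in range(max_iters):
        w_mid = 0.5 * (w_min + w_max)       # midpoint of the current interval
        model = solve_for_weight(w_mid)     # re-solve the weighted problem
        cost = damage_cost(model)
        if abs(cost - threshold) <= tol:    # damage cost close enough to target
            return w_mid, model
        if cost > threshold:
            w_min = w_mid                   # too much damage: try larger weights
        else:
            w_max = w_mid                   # under budget: try smaller weights
    return w_mid, model                     # best found within the iteration budget
```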
In our experiments, the procedure required 19 iterations, which meant recomputing the optimal model 19 times. But the procedure scales linearly with the data, as our proof enables us to exploit the fact that the problem constraints don’t apply across products. Hence we can decouple the optimizations for different products. So even with 100 million data points, this isn’t an overwhelming computational burden.

ABOUT THE AUTHOR

#### **[Karthik S. Gurumoorthy](https://www.amazon.science/author/karthik-s-gurumoorthy)**

Karthik Gurumoorthy is a senior machine learning scientist with the Amazon Machine Learning group in India.

#### **[Vineet Chaoji](https://www.amazon.science/author/vineet-chaoji)**

Vineet Chaoji is a senior applied-science manager with the Amazon Machine Learning group in India.