Improving complementary-product recommendations

{"value":"One way that e-commerce sites make life easier for customers is by recommending products that complement whatever the customer is looking for: someone buying a tennis racket, for instance, may also want to buy tennis balls; someone buying a camera may want an SD card for extra storage.\n\nAt this year’s [Conference on Information and Knowledge Management](https://www.amazon.science/conferences-and-events/cikm-2020), my colleagues at the University of California, Los Angeles, and Amazon and I will present a [new deep-learning-based method](https://www.amazon.science/publications/p-companion-a-principled-framework-for-diversified-complementary-product-recommendation) for doing complementary-product recommendation (CPR) that, in our tests, was 7% more likely to find a product that the customer wanted to buy than existing methods.\n\nThat improvement comes from three main strategies: better selection of training data for the CPR model; greater diversity in the types of products recommended; and respect for the asymmetry of the CPR problem (while an SD card may a be a good product to complement a camera, a camera is not a good product to complement an SD card).\n\nOur approach also addresses the problem of cold start, or predicting complementary products for items that were added to the product catalogue after the machine learning model was trained. To do that, we use an embedding scheme developed at Amazon, called Product2vec, to represent the inputs to the CPR model — the products we seek to complement — according to their attributes and their relationships with other products, rather than simply using their names or ID numbers.\n\n#### **Implicit signals**\n\nFor training data, our model, like most other CPR models, relies on implicit signals from customers. We consider three ways that product x might be related to product y: co-purchase, meaning customers who purchased 𝑥 also purchased y; co-view, meaning customers who viewed x also viewed y; and purchase after view, meaning customers who viewed x eventually bought y.\n\nCPR models typically use co-views and purchase after view as an indication of similarity and co-purchase as an indication of complementarity. But there is considerable overlap between these three categories.\n\nOur intuition was that training a CPR model on product pairs that show up in the co-purchase data but not in the co-view and purchase-after-view data would lead to better predictions. \n\nUser studies in which participants rated pairs of products as substitutable, complementary, or irrelevant bore out this intuition: the complementarity ratings of co-purchase-only product pairs were 30% higher than those of co-purchase product pairs that also showed up in the co-view and purchase-after-view data. Accordingly, we used co-purchase-only product pairs to train our model.\n\nThe inputs to our model are Product2vec embedding vectors. Embeddings represent data items as points in a multidimensional space, such that proximity in the space indicates some relationship between the items. 
The inputs to our model are Product2vec embedding vectors. Embeddings represent data items as points in a multidimensional space, such that proximity in the space indicates some relationship between the items. In our case, that relationship is similarity: points representing different brands of tennis rackets should cluster together in the space, as should points representing cameras, and so on.

![image.png](https://dev-media.amazoncloud.cn/8ad2e31b15874106a8e5985feeb89798_image.png)

In our graphical representation of relationships between products, each node includes information such as a product’s category, type, and image, and edges represent relationships such as the co-viewing and co-purchase of products.

In the same way that we train our CPR model on co-purchase-only data, we train Product2vec on pairs of products that show up in the co-view and purchase-after-view data but not in the co-purchase data. The idea is that customers might view variations of the same product before selecting one for purchase, but co-purchased products are likely to be complementary rather than similar.

Product2vec embedding helps solve the cold-start problem, as it will produce a meaningful embedding even for products it hasn’t seen before.

#### **Diversification**

CPR models are typically trained to output the most frequent co-purchases for each input product. But this can lead to homogeneity of outputs: the top three co-purchases for a tennis racket, for instance, might be three different brands of tennis balls. We believe that customers would prefer more-diverse complementary-product recommendations: for instance, the top three recommendations for a tennis racket should be something like a can of tennis balls, a pack of overgrips, and a headband.

We enforce diversity through our model architecture. For every input product, we pass its product-type embedding through a neural network (the type transition network) that outputs the embeddings of complementary product types. Each of those embeddings is then concatenated with the embedding of the input product before passing to the module that generates the recommendations (the type-item prediction module).

![image.png](https://dev-media.amazoncloud.cn/181b6e17191140d6b6db6665b2c08d59_image.png)

The architecture of our model. For each input, the type transition module outputs a set of vectors representing complementary product types. These are combined with the representation of the input product before it passes to the type-item prediction module, to ensure diversity in the model’s outputs.

The whole model is trained end to end: that is, during training, the type transition network is evaluated solely according to the accuracy of the type-item prediction module’s outputs. But each output of the type transition network is associated with a single output of the type-item prediction module, which naturally leads to greater type diversity among recommendations.

The addition of the type transition network also breaks the symmetry between related products that can cause problems for the typical CPR system. The typical system bases its judgments of complementarity on proximity in the embedding space. But in that space, an SD card is as close to a camera as a camera is to an SD card.

The type transition network, however, learns to output different product-type embeddings for cameras and SD cards, which enables our model to better respond to other, asymmetric signals in the data.
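The following simplified PyTorch sketch shows the shape of this architecture. The module names mirror the description above, but the layer sizes, the number of recommendation slots, and the scoring scheme are hypothetical simplifications, not the actual model from the paper:

```python
import torch
import torch.nn as nn

class TypeTransition(nn.Module):
    """Maps an input product-type embedding to k complementary-type
    embeddings, one per recommendation slot. Sizes are illustrative."""
    def __init__(self, type_dim: int = 32, k: int = 3):
        super().__init__()
        self.k, self.type_dim = k, type_dim
        self.net = nn.Sequential(
            nn.Linear(type_dim, 64), nn.ReLU(),
            nn.Linear(64, k * type_dim),
        )

    def forward(self, type_emb: torch.Tensor) -> torch.Tensor:
        # (batch, type_dim) -> (batch, k, type_dim)
        return self.net(type_emb).view(-1, self.k, self.type_dim)

class TypeItemPredictor(nn.Module):
    """Scores items given the input product embedding concatenated
    with one predicted complementary-type embedding."""
    def __init__(self, item_dim: int = 64, type_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(item_dim + type_dim, 64), nn.ReLU(),
            nn.Linear(64, item_dim),  # a query vector in item space
        )

    def forward(self, item_emb, comp_type_emb):
        # The query would be matched against candidate item embeddings.
        return self.net(torch.cat([item_emb, comp_type_emb], dim=-1))

item_emb = torch.randn(1, 64)  # Product2vec embedding (stand-in)
type_emb = torch.randn(1, 32)  # product-type embedding (stand-in)
transition, predictor = TypeTransition(), TypeItemPredictor()

comp_types = transition(type_emb)  # (1, 3, 32): three complementary types
queries = [predictor(item_emb, comp_types[:, i]) for i in range(3)]
# Each query is conditioned on a *different* predicted type, so
# nearest-neighbor retrieval returns items of different types,
# which is what yields diversity in the final recommendations.
```

Note that conditioning each recommendation slot on its own predicted type embedding is also what makes the mapping directional: the predicted complementary types for a camera need not match those for an SD card.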
In experiments, we used co-purchase data to compare our model’s performance to that of three leading CPR systems. We scored the models’ recommendations according to the frequency with which their recommended products were co-purchased with the input product.

On two different data sets — electronics and grocery — and three different accuracy measures — the accuracy of the top recommendation, the top three recommendations, and the top ten recommendations — our model outperformed the others across the board.
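Those three measures correspond to hit rates at cutoffs of 1, 3, and 10. As a rough illustration (the paper’s exact scoring protocol may differ in detail), a per-product hit@k check looks like the sketch below; averaging it over all test products gives the reported accuracy. The product IDs are hypothetical.

```python
def hit_at_k(recommended: list, co_purchased: set, k: int) -> float:
    """1.0 if any of the top-k recommendations was actually
    co-purchased with the input product, else 0.0."""
    return float(any(item in co_purchased for item in recommended[:k]))

# Hypothetical ground-truth co-purchases for one input product.
truth = {"balls_3", "overgrip_9"}
recs = ["balls_1", "balls_3", "headband_2", "overgrip_9"]

for k in (1, 3, 10):
    print(f"hit@{k} = {hit_at_k(recs, truth, k)}")
# hit@1 = 0.0, hit@3 = 1.0, hit@10 = 1.0
```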
ABOUT THE AUTHOR

#### **[Tong Zhao](https://www.amazon.science/author/tong-zhao)**

Tong Zhao is a senior applied scientist in the Amazon Product Graph group.