A general approach to solving bandit problems

海外精选
海外精选的内容汇集了全球优质的亚马逊云科技相关技术内容。同时,内容中提到的“AWS” 是 “Amazon Web Services” 的缩写,在此网站不作为商标展示。
0
0
{"value":"Bandit problems are problems in which an agent interacting with the environment tries to simultaneously maximize some reward and learn how to maximize that reward. The name comes from the situation of a gambler trying to discover which of a casino’s slot machines — its one-armed bandits — offers the best payout, while minimizing the amount of money fed into machines that offer a lower chance of winning.\n\n![image.png](https://dev-media.amazoncloud.cn/449b415cdb3848e9a2633926524a56ae_image.png)\n\nBandit problems take their name from one-armed bandits, or slot machines. The problem structure is that of a gambler trying to discover which of a casino’s slot machines offers the best payout, while minimizing the amount of money fed into machines that offer a lower chance of winning.\n\nCREDIT: MBBIRDY/GETTY IMAGES/ISTOCKPHOTO\n\nBandit problems arise in a wide range of contexts, but designing and deploying machine learning systems to solve them is often too complicated to be practical. So we’ve developed a simple, flexible framework for solving bandit problems, which lets us bring the benefits of powerful statistical tools to applications that are not as high-impact as ranking content on the Amazon home page but still make a difference in the quality of our customers’ experiences.\n\nAt this year’s International Conference on Information and Knowledge Management (CIKM), we are presenting two applications of this framework, as an initial demonstration of its flexibility and ease of use. In ongoing work, we’re applying the framework to other problems, as well.\n\nThe [first](https://www.amazon.science/publications/learning-to-rank-in-the-position-based-model-with-bandit-feedback\\n) CIKM paper is about the learning-to-rank problem, or determining the order in which a list of items should be presented to a customer. The classic learning-to-rank problem focuses on ordering search results, but the same methods apply to any presentation of information, such as the layout of a web page or ranking music recommendations.\n\nThe [second](https://www.amazon.science/publications/personalizing-natural-language-understanding-using-multi-armed-bandits-and-implicit-feedback) paper is about a specific application of learning-to-rank to natural-language understanding (NLU), of the kind Alexa does when processing customer requests. Where an utterance has multiple possible NLU interpretations, learning-to-rank allows us to select the best one for a specific customer. If, for instance, a customer says to Alexa, “Play Dark Side of the Moon”, it’s unclear whether this refers to the album by Pink Floyd or the song by Lil Wayne. Alexa’s NLU models output lists of possible NLU interpretations, scored by probability. Our system re-ranks those lists on the basis of individual customers’ listening histories.\n\nWe tested our learning-to-rank approach by using it to determine the order in which Amazon Music presents music recommendations to customers. Compared to a learning-to-rank algorithm that uses matrix factorization, our method led to a 7.6% increase in the frequency with which customers selected one of the recommended songs for playback and a 7.2% increase in the listening duration of the selected songs.\n\nSimilarly, we tested the NLU re-ranking system on spoken-language music requests, using accepted playbacks as an implicit signal that a song was correctly selected.\n\nThe re-ranking was limited to a relatively small percentage of traffic, where the top NLU interpretation had not worked well in the past. On those requests, we observed a dramatic increase in accepted playbacks, in the range of 50% to 70%.\n\n#### **Where's the action?**\n\nOur framework models each interaction in the bandit setting as the ordering of a finite list of given actions. An action could be playing a song or displaying a search result or a layout element at a particular location on-screen.\n\nWe model each action as a vector of fixed length. This permits the later addition of actions that are not yet known when the model is created. The vector may also include contextual information, which can lead the model to make different choices in different situations. For instance, if a customer says to a voice agent, “Play exile”, the model might rank either the Taylor Swift song “Exile” or music by the band Exile higher, depending on what the contextual information indicates about the customer’s listening history.\n\nAfter the model presents its list of actions, it receives feedback about one or more of the actions. If the voice agent plays a song, and the customer cuts it off after only a few seconds, that’s an indication of dissatisfaction with the song choice. If a website presents the customer with a list of song options, and the customer clicks on three of them, that’s an indication that those songs should have been at the top of the list.\n\nIn the bandit setting, the goal is to both explore the environment — to learn what actions elicit the greatest rewards — and exploit the knowledge gained — to maximize the reward. After each interaction with the environment, the agent has new information on which to base the ordering of its next list. The idea is to select the sequence of orderings that best manages the explore/exploit trade-off.\n\nIn our CIKM papers, we adapted two well-known learning algorithms to work with our bandit model: the upper-confidence-bound (UCB) algorithm and Thompson sampling. But the framework is flexible enough permit the use of other algorithms as well.\n\nIn the [learning-to-rank](https://www.amazon.science/publications/learning-to-rank-in-the-position-based-model-with-bandit-feedback) paper, we extended our model to account for position bias, or the influence of an item’s position in a list on the customer’s decision to select it: items toward the top of a list tend to be selected more frequently, even if they’re not the best matches for the customer’s query. We thus model the probability that an item is selected as a combination of both its relevance to the query and its position on the list.\n\nIn the [NLU interpretation](https://www.amazon.science/publications/personalizing-natural-language-understanding-using-multi-armed-bandits-and-implicit-feedback) paper, the crucial adaptation was determining which contextual information to include in the action vector. The popularity of the song or album to be played is one such factor, as is an indication of the customer’s “affinity” for the artist, based on listening history.\n\nInterested readers can consult the papers for more details. But these are just two illustrative applications of a framework we are using to improve the quality of the experiences we provide our customers.\n\nABOUT THE AUTHOR\n\n#### **[Yannik Stein](https://www.amazon.science/author/yannik-stein)**\n\nYannik Stein is a machine learning engineer at Amazon.\n\n#### **[Fabian Moerchen](https://www.amazon.science/author/fabian-moerchen)**\n\nFabian Moerchen is a principal applied scientist at Amazon.\n\n\n\n\n","render":"<p>Bandit problems are problems in which an agent interacting with the environment tries to simultaneously maximize some reward and learn how to maximize that reward. The name comes from the situation of a gambler trying to discover which of a casino’s slot machines — its one-armed bandits — offers the best payout, while minimizing the amount of money fed into machines that offer a lower chance of winning.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/449b415cdb3848e9a2633926524a56ae_image.png\\" alt=\\"image.png\\" /></p>\n<p>Bandit problems take their name from one-armed bandits, or slot machines. The problem structure is that of a gambler trying to discover which of a casino’s slot machines offers the best payout, while minimizing the amount of money fed into machines that offer a lower chance of winning.</p>\n<p>CREDIT: MBBIRDY/GETTY IMAGES/ISTOCKPHOTO</p>\n<p>Bandit problems arise in a wide range of contexts, but designing and deploying machine learning systems to solve them is often too complicated to be practical. So we’ve developed a simple, flexible framework for solving bandit problems, which lets us bring the benefits of powerful statistical tools to applications that are not as high-impact as ranking content on the Amazon home page but still make a difference in the quality of our customers’ experiences.</p>\n<p>At this year’s International Conference on Information and Knowledge Management (CIKM), we are presenting two applications of this framework, as an initial demonstration of its flexibility and ease of use. In ongoing work, we’re applying the framework to other problems, as well.</p>\n<p>The <a href=\\"https://www.amazon.science/publications/learning-to-rank-in-the-position-based-model-with-bandit-feedback\\" target=\\"_blank\\">first</a> CIKM paper is about the learning-to-rank problem, or determining the order in which a list of items should be presented to a customer. The classic learning-to-rank problem focuses on ordering search results, but the same methods apply to any presentation of information, such as the layout of a web page or ranking music recommendations.</p>\\n<p>The <a href=\\"https://www.amazon.science/publications/personalizing-natural-language-understanding-using-multi-armed-bandits-and-implicit-feedback\\" target=\\"_blank\\">second</a> paper is about a specific application of learning-to-rank to natural-language understanding (NLU), of the kind Alexa does when processing customer requests. Where an utterance has multiple possible NLU interpretations, learning-to-rank allows us to select the best one for a specific customer. If, for instance, a customer says to Alexa, “Play Dark Side of the Moon”, it’s unclear whether this refers to the album by Pink Floyd or the song by Lil Wayne. Alexa’s NLU models output lists of possible NLU interpretations, scored by probability. Our system re-ranks those lists on the basis of individual customers’ listening histories.</p>\\n<p>We tested our learning-to-rank approach by using it to determine the order in which Amazon Music presents music recommendations to customers. Compared to a learning-to-rank algorithm that uses matrix factorization, our method led to a 7.6% increase in the frequency with which customers selected one of the recommended songs for playback and a 7.2% increase in the listening duration of the selected songs.</p>\n<p>Similarly, we tested the NLU re-ranking system on spoken-language music requests, using accepted playbacks as an implicit signal that a song was correctly selected.</p>\n<p>The re-ranking was limited to a relatively small percentage of traffic, where the top NLU interpretation had not worked well in the past. On those requests, we observed a dramatic increase in accepted playbacks, in the range of 50% to 70%.</p>\n<h4><a id=\\"Wheres_the_action_23\\"></a><strong>Where’s the action?</strong></h4>\\n<p>Our framework models each interaction in the bandit setting as the ordering of a finite list of given actions. An action could be playing a song or displaying a search result or a layout element at a particular location on-screen.</p>\n<p>We model each action as a vector of fixed length. This permits the later addition of actions that are not yet known when the model is created. The vector may also include contextual information, which can lead the model to make different choices in different situations. For instance, if a customer says to a voice agent, “Play exile”, the model might rank either the Taylor Swift song “Exile” or music by the band Exile higher, depending on what the contextual information indicates about the customer’s listening history.</p>\n<p>After the model presents its list of actions, it receives feedback about one or more of the actions. If the voice agent plays a song, and the customer cuts it off after only a few seconds, that’s an indication of dissatisfaction with the song choice. If a website presents the customer with a list of song options, and the customer clicks on three of them, that’s an indication that those songs should have been at the top of the list.</p>\n<p>In the bandit setting, the goal is to both explore the environment — to learn what actions elicit the greatest rewards — and exploit the knowledge gained — to maximize the reward. After each interaction with the environment, the agent has new information on which to base the ordering of its next list. The idea is to select the sequence of orderings that best manages the explore/exploit trade-off.</p>\n<p>In our CIKM papers, we adapted two well-known learning algorithms to work with our bandit model: the upper-confidence-bound (UCB) algorithm and Thompson sampling. But the framework is flexible enough permit the use of other algorithms as well.</p>\n<p>In the <a href=\\"https://www.amazon.science/publications/learning-to-rank-in-the-position-based-model-with-bandit-feedback\\" target=\\"_blank\\">learning-to-rank</a> paper, we extended our model to account for position bias, or the influence of an item’s position in a list on the customer’s decision to select it: items toward the top of a list tend to be selected more frequently, even if they’re not the best matches for the customer’s query. We thus model the probability that an item is selected as a combination of both its relevance to the query and its position on the list.</p>\\n<p>In the <a href=\\"https://www.amazon.science/publications/personalizing-natural-language-understanding-using-multi-armed-bandits-and-implicit-feedback\\" target=\\"_blank\\">NLU interpretation</a> paper, the crucial adaptation was determining which contextual information to include in the action vector. The popularity of the song or album to be played is one such factor, as is an indication of the customer’s “affinity” for the artist, based on listening history.</p>\\n<p>Interested readers can consult the papers for more details. But these are just two illustrative applications of a framework we are using to improve the quality of the experiences we provide our customers.</p>\n<p>ABOUT THE AUTHOR</p>\n<h4><a id=\\"Yannik_Steinhttpswwwamazonscienceauthoryannikstein_43\\"></a><strong><a href=\\"https://www.amazon.science/author/yannik-stein\\" target=\\"_blank\\">Yannik Stein</a></strong></h4>\n<p>Yannik Stein is a machine learning engineer at Amazon.</p>\n<h4><a id=\\"Fabian_Moerchenhttpswwwamazonscienceauthorfabianmoerchen_47\\"></a><strong><a href=\\"https://www.amazon.science/author/fabian-moerchen\\" target=\\"_blank\\">Fabian Moerchen</a></strong></h4>\n<p>Fabian Moerchen is a principal applied scientist at Amazon.</p>\n"}
目录
亚马逊云科技解决方案 基于行业客户应用场景及技术领域的解决方案
联系亚马逊云科技专家
亚马逊云科技解决方案
基于行业客户应用场景及技术领域的解决方案
联系专家
0
目录
关闭