NeurIPS: Shipra Agrawal on the appeal of reinforcement learning

{"value":"![image.png](https://dev-media.amazoncloud.cn/f472043ffa264e098136d6ab7c1d042d_image.png)\n\nShipra Agrawal, Amazon Scholar and assistant professor in Columbia University’s Industrial Engineering and Operations Research Department.\n\nAs deep neural networks have come to dominate AI, the Conference on Neural Information Processing Systems (++[NeurIPS 2020](https://www.amazon.science/conferences-and-events/neurips-2020)++) has become the most popular conference in the field.\n\nAnd at the most popular conference in the field, one of the most popular topics is *reinforcement learning*: at this year’s NeurIPS, 95 accepted papers use the term in their titles.\n\n“Reinforcement learning is very, very powerful, because you can kind of learn anything, adaptively from the feedback, and by exploring the decision space,” says ++[Shipra Agrawal](https://www.amazon.science/research-awards/recipients/shipra-agrawal)++, an ++[Amazon Scholar](https://www.amazon.science/scholars)++, an assistant professor in Columbia University’s Industrial Engineering and Operations Research Department, and an area chair at NeurIPS, who studies reinforcement learning. “In concept, it's very akin to how humans learn, by trial and error, and how they adapt to what they see — without requiring a loss function and so on, just by some kind of rewards or positive feedback.”\n\nIn reinforcement learning, an agent explores its environment, trying out different responses to different states of affairs, gradually learning a set of policies that will enable it to maximize some reward.\n\n“Usually, when we say ‘classification’ or ‘regression’, it's a very specific pattern recognition problem, and then you have to think carefully about where you can apply it,” Agrawal says. “But when you say ‘reinforcement learning,’ at a high level, it's like, ‘Oh, I will just make some decisions, see some feedback, and then I'll change my decisions accordingly.’ That's it.\n\n“But theoretically, it also has a lot of structure, which makes it possible to analyze it in a rigorous manner. So I think there's this unique combination of theoretical curiosity and practical potential that makes it very attractive. I teach a course on reinforcement learning [at Columbia], which is a PhD class of 40, and it usually has a waiting list of 100.”\n\n\n#### **Markov decision processes**\n\n\nWhat gives the problem of reinforcement learning its theoretical structure, Agrawal explains, is a model of sequential decision making called a Markov decision process (MDP). An MDP consists of a sequence of decisions, and each decision elicits feedback: if, for instance, the decision is what products to display on a shopping site, the feedback might be whether a customer buys one of the displayed products. \n\nEach new decision is based not only on the feedback to the previous decision but on the whole history of decisions and feedback that preceded it. That history is captured in a variable called the state.\n\n“That makes the decision-making problem complex, because now you need to think not only about what decisions would generate good immediate feedback but also what decisions could take you to good states,” Agrawal says. “A decision may produce a good immediate reaction, but it may take the customer to a bad state later on. 
Framing the problem of reinforcement learning in terms of MDPs is straightforward, Agrawal says. “Just take MDP and say, ‘I don't know the state transition model, and I don't know the reward model, and I need to learn them from the feedback itself,’” she says. “I can observe the state at the last time step, I know what the state is right now, and if I've seen enough of these transitions, I can infer what the state transition model must have been. In the same way, I can infer what the reward model must have been. The challenge is to combine this learning with the task of finding a good policy.”
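As a rough illustration of that inference step, the sketch below assumes a small, finite state and action space and a log of (state, action, reward, next state) tuples; it estimates the transition model from empirical counts and the reward model from empirical means. The names in it (`estimate_mdp`, the `"show_ad"` example) are hypothetical, not taken from any particular system.

```python
from collections import defaultdict

def estimate_mdp(transitions):
    """Estimate transition probabilities and mean rewards from logged
    (state, action, reward, next_state) tuples -- a count-based sketch of
    'inferring the model from the feedback itself'."""
    counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
    reward_sum = defaultdict(float)                  # (s, a) -> total reward
    visits = defaultdict(int)                        # (s, a) -> visit count

    for s, a, r, s_next in transitions:
        counts[(s, a)][s_next] += 1
        reward_sum[(s, a)] += r
        visits[(s, a)] += 1

    # Empirical state transition model: P(s' | s, a)
    P = {sa: {s_next: c / visits[sa] for s_next, c in nexts.items()}
         for sa, nexts in counts.items()}
    # Empirical reward model: R(s, a)
    R = {sa: reward_sum[sa] / visits[sa] for sa in visits}
    return P, R

# Hypothetical example: two states, one action, observed four times.
logged = [("s0", "show_ad", 1.0, "s1"), ("s0", "show_ad", 0.0, "s0"),
          ("s0", "show_ad", 1.0, "s1"), ("s1", "show_ad", 0.0, "s0")]
P_hat, R_hat = estimate_mdp(logged)
print(P_hat[("s0", "show_ad")])   # {'s1': 0.666..., 's0': 0.333...}
print(R_hat[("s0", "show_ad")])   # 0.666...
```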
MDPs have been studied since at least the 1950s. But, Agrawal says, “once you make machine learning a central feature in that problem, then suddenly, the goal shifts. How do I properly combine these two things? The initial solutions were based on a two-phase process: you learn the model first, and then you optimize it. But that doesn't scale. How well do you need to learn the model? You can figure out how well you need to learn it in the worst case, but that's too much: you can't really learn the model that well, everywhere. And then you say, ‘Well, I don't need to learn it everywhere.’ Who cares what happens in some situation which is never going to occur or which I never really want to go into anyway? Conceptually, the problem becomes very, very different once you combine these two things.”

#### **Explore/exploit**

In her own work, Agrawal is exploring another trade-off posed by reinforcement learning: the explore-exploit dilemma. During reinforcement learning, the agent is trying to learn how to maximize some reward (exploration) — but it's also trying to maximize the reward (exploitation). How does it decide when to risk an untried response that may provide useful information but may also reduce its reward?

The explore-exploit dilemma can be modeled as a [*bandit problem*](https://www.amazon.science/blog/a-general-approach-to-solving-bandit-problems), named after the situation of a gambler trying to identify the one slot machine (one-armed bandit) among many with the highest average payout. Bandit problems are another popular topic at NeurIPS: the word “bandit” shows up in the titles of 45 accepted papers.

“A lot of my work is based on this idea of posterior sampling — Thompson sampling — which is a Bayesian way to do exploration/exploitation,” Agrawal says. “It's a theoretically rigorous method — you can prove guarantees on it — and at the same time, with posterior sampling, the cost of exploration is often as low as in some practically good, easy methods. So it kind of provides best of both worlds, at least in bandits. I'm very excited about taking these techniques and using them for reinforcement learning, especially for deep reinforcement learning.”
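To make the idea concrete, here is a minimal sketch of Thompson sampling in its simplest setting, the Bernoulli bandit described above, with made-up payout probabilities: each arm keeps a Beta posterior over its payout rate, the agent plays the arm whose sampled rate is highest, and the observed reward updates that arm's posterior.

```python
import random

def thompson_sampling(true_probs, horizon=5000, seed=0):
    """Thompson sampling for a Bernoulli bandit: Beta(1, 1) priors, sample a
    payout rate for each arm, play the arm with the largest sample, update."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [1] * n_arms   # Beta alpha parameters (uniform prior)
    failures = [1] * n_arms    # Beta beta parameters (uniform prior)
    total_reward = 0

    for _ in range(horizon):
        # Exploration and exploitation in one step: sample each posterior...
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])   # ...play the best sample.
        reward = 1 if rng.random() < true_probs[arm] else 0  # Bernoulli feedback
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures

# Hypothetical slot machines with payout rates unknown to the agent.
T = 5000
reward, s, f = thompson_sampling([0.3, 0.5, 0.7], horizon=T)
print(reward / T)   # close to 0.7: the best arm ends up played most often
```

Arms that look good get played and so have their estimates refined (exploitation), while uncertain arms still occasionally produce the highest sample and get tried (exploration); that is the sense in which a single posterior-sampling step handles both sides of the dilemma.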
*Learn more about Amazon researchers' involvement in [NeurIPS 2020](https://www.amazon.science/conferences-and-events/neurips-2020), the papers they're presenting, and the workshops that Amazon is sponsoring and that our researchers are organizing and attending, and sign up to participate in a virtual panel on fairness in AI on December 11.*

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.