Multi-agent reinforcement learning for an uncertain world

{"value":"++[Reinforcement learning](https://www.amazon.science/blog/neurips-shipra-agrawal-on-the-appeal-of-reinforcement-learning)++ (RL), in which an agent learns to maximize some reward through trial-and-error exploration of its environment, is a hot topic in AI. In recent years, it’s led to breakthroughs in robotics, autonomous driving, and game playing, among other applications.\n\nOften, RL agents are trained in simulations before being released into the real world. But simulations are rarely perfect, and an agent that doesn’t know how to explicitly model its uncertainty about the world will often flounder outside the training environment.\n\nSuch uncertainty has been nicely handled in the case of single-agent RL. But it hasn’t been as thoroughly explored in the case of multi-agent RL (MARL), where multiple agents are trying to optimize their own long-term rewards by interacting with the environment and with each other.\n\n![image.png](https://dev-media.amazoncloud.cn/13563832f66247d1b25bafd6b2efdc48_image.png)\n\nIn simulations of the cooperative-navigation task, in which three agents (purple) work together to occupy three landmarks (black dots), the researchers' new approach (center) consistently outperformed its predecessors when uncertainty was high. (Video below.)\n\nCREDIT: TAO SUN\n\nIn a ++[paper](https://www.amazon.science/publications/robust-multi-agent-reinforcement-learning-with-model-uncertainty)++ we are presenting at the 34^th^ Conference on Neural Information Processing Systems, we propose a MARL framework that is robust to the possible uncertainty of the model. In experiments that used state-of-the-art systems as benchmarks, our approach accumulated higher rewards at higher uncertainty. \n\nFor example, in cooperative navigation, in which three agents locate and occupy three distinct landmarks, our robust MARL agents perform significantly better than state-of-the-art system when uncertainty was high. In the predator-prey environment, in which predator agents attempt to “catch” (touch) prey agents, our robust MARL agents outperform the baseline agents regardless of whether they are predator or prey.\n\n\n#### **Markov games**\n\n\nReinforcement learning is typically modeled using a sequential decision process called a Markov decision process, which has several components: a *state* space, an *action* space, *transition* dynamics, and a *reward* function. \n\nAt each time step, the agent takes an action and transitions to a new state, according to a transition probability. Each action incurs a reward or penalty. By trying out sequences of actions, the agent develops a set of *policies* that optimize its cumulative reward.\n\nMarkov *games* generalize this model to the multi-agent setting. In a Markov game, state transitions are the result of multiple actions taken by multiple agents, and each agent has its own reward function.\n\nTo maximize its cumulative reward, a given agent must navigate not only the environment but also the actions of its fellow agents. So in addition to learning its own set of policies, it also tries to infer the policies of the other agents.\n\nIn many real-world applications, however, perfect information is impossible. If multiple self-driving cars are sharing the road, no one of them can know exactly what rewards the others are maximizing or what the joint transition model is. 

In many real-world applications, however, perfect information is impossible. If multiple self-driving cars are sharing the road, none of them can know exactly what rewards the others are maximizing or what the joint transition model is. In such situations, the policy a given agent adopts should be robust to the possible uncertainty of the MARL model.

In the framework we present in our paper, each player considers a distribution-free Markov game: a game in which the probability distribution that describes the environment is unknown. Consequently, the player doesn’t seek to learn specific reward and state values but rather a range of possible values, known as the uncertainty set. Using uncertainty sets means that the player doesn’t need to explicitly model its uncertainty with another probability distribution.
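
As a rough illustration of what reasoning over an uncertainty set looks like, a player can evaluate an action by asking how badly it could turn out under any model in the set. The reward interval, candidate transition distributions, and discount factor below are made up for this sketch and are not values from the paper.

```python
def worst_case_value(reward_interval, candidate_transitions, next_state_values):
    """Return the lowest value the action could have under any model
    in the uncertainty set (nature picks the least favorable one)."""
    worst_reward = reward_interval[0]  # lower end of the reward range
    worst_future = min(
        sum(p * v for p, v in zip(probs, next_state_values))
        for probs in candidate_transitions
    )
    return worst_reward + 0.95 * worst_future  # 0.95: an assumed discount factor

# Uncertainty set: the reward lies somewhere in [0.5, 1.0], and the transition
# distribution is one of three plausible candidates over three next states.
value = worst_case_value(
    reward_interval=(0.5, 1.0),
    candidate_transitions=[(0.8, 0.1, 0.1), (0.6, 0.2, 0.2), (0.5, 0.3, 0.2)],
    next_state_values=(2.0, 1.0, -1.0),
)
print(value)
```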

#### **Uncertainty as agency**


We treat uncertainty as an adversarial agent, nature, whose policies are designed to produce the worst-case model data for the other agents at every state. Treating uncertainty as another player allows us to define a robust Markov perfect Nash equilibrium for the game: a set of policies such that, given the possible uncertainty of the model, no player has an incentive to change its policy unilaterally.

To prove the utility of this adversarial approach, we first propose a Q-learning-based algorithm, which, under certain conditions, is guaranteed to converge to the Nash equilibrium. Q-learning is a model-free RL algorithm, meaning that it doesn’t need to learn explicit transition probabilities and reward functions. Instead, it attempts to learn the expected cumulative reward for each set of actions in each state.
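
The following is a minimal, tabular sketch of the idea behind a robust Q-update: rather than bootstrapping from a single estimated model, the learner lets nature choose the worst-case reward and next state from a small uncertainty set. The states, actions, learning rate, and uncertainty sets here are hypothetical, and the sketch omits the multi-agent bookkeeping (joint actions and the other players’ policies) that the full algorithm in the paper requires.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95        # assumed learning rate and discount factor
Q = defaultdict(float)          # Q[(state, action)] -> estimated value
ACTIONS = ("left", "stay", "right")

def robust_q_update(state, action, reward_set, next_state_set):
    """reward_set: possible rewards for (state, action);
    next_state_set: possible next states. Nature picks the worst combination."""
    worst_target = min(
        r + GAMMA * max(Q[(s_next, a)] for a in ACTIONS)
        for r in reward_set
        for s_next in next_state_set
    )
    Q[(state, action)] += ALPHA * (worst_target - Q[(state, action)])

# One update: the reward could be 0 or 1, and the step could land in cell 1 or 2.
robust_q_update(state=0, action="right", reward_set=(0.0, 1.0), next_state_set=(1, 2))
print(Q[(0, "right")])
```

Here nature plays the role of the adversarial agent described above, shrinking the learning target to whatever the uncertainty set allows in the worst case.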

If the space of possible states and actions becomes large enough, however, learning the cumulative rewards of all actions in all states becomes impractical. The alternative is to use function approximation to estimate state values and policies, but integrating function approximation into Q-learning is difficult.

So in our paper, we also develop a policy-gradient/actor-critic-based robust MARL algorithm. This algorithm doesn’t provide the same convergence guarantees that Q-learning does, but it makes it easier to use function approximation.

This is the MARL framework we used in our experiments. We tested our approach against two state-of-the-art systems, one that was designed for the adversarial setting and one that wasn’t, on a range of standard MARL tasks: cooperative navigation, keep-away, physical deception, and the predator-prey environments. In settings with realistic degrees of uncertainty, our approach outperformed the others across the board.

ABOUT THE AUTHOR


#### **[Sahika Genc](https://www.amazon.science/author/sahika-genc)**


Sahika Genc is a principal applied scientist within Amazon AI. Her team works on reinforcement learning algorithms for [Amazon SageMaker](https://aws.amazon.com/cn/sagemaker/?trk=cndc-detail).