New dataset for training household robots to follow human commands

{"value":"Through smart-home devices and systems, customers can already instruct Alexa to do things like open garage doors, turn lights on and off, or start the dishwasher. But we envision a future in which AI assistants can help with a broader range of more-complex tasks, including performing day-to-day chores, such as preparing breakfast. \n\nTo accomplish such tasks, AI assistants will need to interact with objects in the world, understand natural-language instructions to complete tasks, and engage in conversations with users to clarify ambiguous instructions.\n\nTo aid in the development of such AI assistants, we have ++[publicly released a new dataset](https://github.com/alexa/teach)++ called TEACh, for ++[Task-driven Embodied Agents that Chat](https://www.amazon.science/publications/teach-task-driven-embodied-agents-that-chat)++. TEACh contains over 3,000 simulated dialogues, in which a human instructs a robot in the completion of household tasks, and associated visual data from a simulated environment.\n\n#### **Amazon launches new Alexa Prize SimBot Challenge**\n\nToday, Amazon also announced the ++[Alexa Prize SimBot Challenge](https://www.amazon.science/academic-engagements/amazon-launches-new-alexa-prize-simbot-challenge)++, a competition focused on helping develop next-generation virtual assistants that will assist humans in completing real-world tasks. One of the TEACh benchmarks will be the basis of the challenge's public-benchmark phase.\n\nFor each dialogue, the roles of human and robot were played by paid crowd workers. The worker playing the robot did not know what task needed to be completed but depended entirely on the other worker’s instructions. Each worker received a visual feed that reflected a first-person point of view on the simulated environment. Both workers could move freely through the environment, but only the robot could interact with objects. The workers needed to collaborate and communicate to successfully complete tasks.\n\nThe simulated home environment is based on the ++[AI2-THOR simulator](https://ai2thor.allenai.org/)++, which includes 30 variations on each of four types of rooms: kitchens, living rooms, bedrooms, and bathrooms. Each gameplay session in the dataset consists of the initial and final states of the simulated environment, a task defined in terms of object properties to be satisfied, and a sequence of actions taken by the crowd workers. \n\nThose actions could include movement through the environment, interactions with objects (the robot can pick and place objects, open and close cabinets, drawers, and appliances, toggle lights on and off, operate appliances and faucets, slice objects, and pour liquid out of one object into another).\n\n![image.png](https://dev-media.amazoncloud.cn/fb7186bc79754315b96c42fa7a7fd7ba_image.png)\n\nA sample gameplay session for the Prepare Breakfast task, where the robot has to make coffee and a sandwich with lettuce. The user offers step-by-step instructions but occasionally provides the next step — for example, slicing bread — before the robot has finished the previous step. Occasionally, the user offers help too late, as when the robot finds the knife by searching for it because the user does not provide its location.\n\n#### **Data collection**\n\nTo collect the dataset, we first developed a task definition language that let us specify what properties needed to be satisfied in the environment for a task to be considered complete. 
![image.png](https://dev-media.amazoncloud.cn/fb7186bc79754315b96c42fa7a7fd7ba_image.png)

A sample gameplay session for the Prepare Breakfast task, in which the robot has to make coffee and a sandwich with lettuce. The user offers step-by-step instructions but occasionally provides the next step — for example, slicing bread — before the robot has finished the previous step. Occasionally, the user offers help too late, as when the robot finds the knife by searching for it because the user did not provide its location.

#### **Data collection**

To collect the dataset, we first developed a task definition language that let us specify what properties needed to be satisfied in the environment for a task to be considered complete. For example, to check that coffee has been made, we confirm that the environment contains a clean mug that is filled with coffee. We implemented a framework that checks the AI2-THOR simulator for the status of different tasks and provides natural-language prompts for the steps remaining to complete a task.
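As a rough illustration of this kind of check, the sketch below tests the “coffee is made” condition against a list of object-state dictionaries. The field names are modeled loosely on AI2-THOR’s object metadata and should be treated as assumptions, not the exact checks our framework performs.

```python
# Minimal sketch of a task-completion check: is there a clean mug filled
# with coffee anywhere in the scene? The metadata field names below are
# assumptions modeled on AI2-THOR-style object state.
from typing import Dict, List


def coffee_is_made(objects: List[Dict]) -> bool:
    """Return True if some clean mug in the scene is filled with coffee."""
    return any(
        obj.get("objectType") == "Mug"
        and not obj.get("isDirty", False)
        and obj.get("isFilledWithLiquid", False)
        and obj.get("fillLiquid") == "coffee"
        for obj in objects
    )


# Example usage with a hand-built scene state:
scene_objects = [
    {"objectType": "Mug", "isDirty": False,
     "isFilledWithLiquid": True, "fillLiquid": "coffee"},
    {"objectType": "Knife", "isDirty": False},
]
print(coffee_is_made(scene_objects))  # True
```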
We then pair two crowd workers using a web interface and place them in the same simulated room. The user can see the prompts describing what steps need to be completed and uses chat to communicate them to the robot. Additionally, the user can determine where important objects are by either clicking on the steps or searching the virtual space, so that, for example, the robot does not have to open every drawer in the kitchen to find a knife hidden in one of them.

![image.png](https://dev-media.amazoncloud.cn/d8502ec67c9044c784a6a83530a768cc_image.png)

An example task definition from the dataset (left) and the views of the simulated environment (right) that let the crowd worker playing the role of the user monitor progress toward task completion.

We place no constraints on the chat interface used by the annotators, and as a result, users provide instructions with different levels of granularity. One might say, “First get a clean mug and prepare coffee,” while another might break this up into several steps — “Grab the dirty mug out of the fridge”, “go wash it in the sink”, “place mug in coffee maker” — waiting for the robot to complete each step before providing the next one.

A user might provide instructions too early — for example, asking the robot to slice bread before it has finished preparing coffee — or too late — telling the robot where the knife is only after it has found it and sliced the bread with it. The user might also help the robot correct mistakes or get unstuck — for example, asking the robot to clear out the sink before placing a new object in it.

In total, we collected 4,365 sessions, of which 3,320 were successful. Of those, we were able to successfully replay 3,047 on the AI2-THOR simulator, meaning that providing the same sequence of actions resulted in the same simulator state. TEACh sessions span all 30 kitchens in the simulator and most of the living rooms, bedrooms, and bathrooms. The successful TEACh sessions span 12 task types and consist of more than 45,000 utterances, with an average of 8.40 user and 5.25 robot utterances per session.

#### **Benchmarks**

We propose three benchmark tasks that machine learning models can be trained to perform using our dataset: execution from dialogue history (EDH), trajectory from dialogue (TfD), and two-agent task completion (TATC).

In the EDH benchmark, the model receives some dialogue history, previous actions taken by the robot, and the corresponding first-person observations from a collected gameplay session. The model is expected to predict the next few actions the robot will take, receiving a first-person observation after each action. The model is judged on whether its actions yield the same result that the player’s actions did in the original gameplay session.
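To make the EDH setup concrete, here is a minimal sketch of the rollout loop, assuming a hypothetical `predict_action` function and simulator `step` function; neither is the API of the released benchmark code.

```python
# Sketch of an EDH-style rollout: the model predicts one action at a time and
# receives a new first-person observation after each step. Function names and
# signatures are assumptions for illustration.
from typing import Callable, List


def run_edh_segment(
    predict_action: Callable[[List[str], List[str], object], str],
    step: Callable[[str], object],  # executes an action, returns the next observation
    dialogue: List[str],
    past_actions: List[str],
    first_obs: object,
    max_steps: int = 50,
) -> List[str]:
    """Roll a model forward until it predicts "Stop" or the step budget runs out."""
    obs, new_actions = first_obs, []
    for _ in range(max_steps):
        action = predict_action(dialogue, past_actions + new_actions, obs)
        if action == "Stop":  # model signals that the segment is complete
            break
        obs = step(action)
        new_actions.append(action)
    return new_actions  # success is judged by the simulator state these actions produce


# Toy usage with a scripted "model" and a no-op environment:
scripted = iter(["Pickup", "Place", "Stop"])
print(run_edh_segment(lambda d, a, o: next(scripted), lambda a: None,
                      ["Commander: put the mug in the coffee maker"], [], None))
# ['Pickup', 'Place']
```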
The EDH benchmark will also be the basis for the public-benchmark phase of [the Alexa Prize SimBot Challenge](https://www.amazon.science/academic-engagements/amazon-launches-new-alexa-prize-simbot-challenge), which we also announced today. The SimBot Challenge is focused on helping advance development of next-generation virtual assistants that will assist humans in completing real-world tasks by continuously learning and gaining the ability to perform commonsense reasoning.

In the TfD benchmark, a model receives the complete dialogue history and has to predict all the actions taken by the robot, receiving a first-person observation after each action.

In the TATC benchmark, the designer needs to build two models, one for the user and one for the robot. The user model receives the same task information that the human worker did, as well as the state of the environment. It has to communicate with the robot model, which takes actions in the environment to complete tasks.
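The control flow of a TATC episode might look roughly like the sketch below, assuming hypothetical `user_model`, `robot_model`, and `env` objects; these interfaces are illustrative, not the benchmark’s actual API.

```python
# Sketch of a TATC-style episode: a user model that sees the task talks to a
# robot model that only sees the dialogue and its first-person view. All
# interfaces here are assumptions for illustration.
from typing import List, Optional, Tuple


def run_tatc_episode(user_model, robot_model, env, max_turns: int = 100) -> bool:
    chat: List[Tuple[str, str]] = []
    for _ in range(max_turns):
        # The user model sees the task definition, the environment state, and the chat so far.
        message: Optional[str] = user_model.next_utterance(env.task, env.state(), chat)
        if message is None:  # user model believes the task is finished
            break
        chat.append(("user", message))

        # The robot model responds with a (possibly empty) reply and simulator actions.
        reply, actions = robot_model.respond(chat, env.observation())
        chat.append(("robot", reply))
        for action in actions:
            env.step(action)  # act in the simulated environment
    return env.task_complete()  # evaluated via the task's object-property conditions
```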
We include baseline model performance on these benchmarks in a [paper we’ve published on arXiv](https://arxiv.org/abs/2110.00534), which we hope will be used as a reference for future work by other research groups.

For the EDH and TfD benchmarks, we created “validation-seen” and “test-seen” splits, which evaluate the ability of models to generalize to new dialogues and execution paths in the rooms used for training, and “validation-unseen” and “test-unseen” splits, which evaluate the ability of models to generalize to dialogues and execution paths in rooms never previously seen. These splits are designed to enable easy model transfer to and from a related dataset, [ALFRED](https://arxiv.org/abs/1912.01734), which also uses floor plans from AI2-THOR and splits its data similarly.

**Acknowledgements**: This project came together through the efforts and support of several people on the Alexa AI team. We would like to thank Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, Dilek Hakkani-Tür, Ron Rezac, Shui Hu, Lucy Hu, Hangjie Shi, Nicole Chartier, Savanna Stiff, Ana Sanchez, Ben Kelk, Joel Sachar, Govind Thattai, Gaurav Sukhatme, Joel Chengottusseriyil, Tony Bissell, Qiaozi Gao, Kaixiang Lin, Karthik Gopalakrishnan, Alexandros Papangelis, Yang Liu, Mahdi Namazifar, Behnam Hedayatnia, Di Jin, and Seokhwan Kim for their contributions to the project.

ABOUT THE AUTHOR

#### **[Aishwarya Padmakumar](https://www.amazon.science/author/aishwarya-padmakumar)**

Aishwarya Padmakumar is an applied scientist in the Alexa AI organization.