New method improves knowledge-graph-based question answering

{"value":"Question answering is a popular task in natural-language processing, where models are given questions such as “What city is the Mona Lisa in?” and trained to predict correct answers, such as “Paris”. \n\nOne way to train question-answering models is to use a knowledge graph, which stores facts about the world in a structured format. Historically, knowledge-graph-based question-answering systems required separate semantic-parsing and entity resolution models, which were costly to train and maintain. But since 2020, the field has been moving toward differentiable knowledge graphs, which allow a single end-to-end model to take questions as inputs and directly output answers.\n\nAt this year’s Conference on Empirical Methods in Natural Language Processing (++[EMNLP](https://www.amazon.science/conferences-and-events/emnlp-2021)++), we presented two extensions to end-to-end question answering using differentiable knowledge graphs.\n\nIn “++[End-to-end entity resolution and question answering using differentiable knowledge graphs](https://www.amazon.science/publications/end-to-end-entity-resolution-and-question-answering-using-differentiable-knowledge-graphs)++”, we explain how to integrate entity resolution into a knowledge-graph-based question-answering model, so that it will be performed automatically. This is a potentially labor-saving innovation, as some existing approaches require hand annotation of entities.\n\nAnd in “++[Expanding end-to-end question answering on differentiable knowledge graphs with intersection](https://www.amazon.science/publications/expanding-end-to-end-question-answering-on-differentiable-knowledge-graphs-with-intersection)++”, we explain how to handle queries that involve multiple entities. In experiments involving two different datasets, our approach improved performance on multi-entity queries by 14% and 19%.\n\n#### **Traditional approaches**\n\nIn a typical knowledge graph, the nodes represent entities, and the edges between nodes represent relationships between entities. For example, a knowledge graph could contain the fact “Mona Lisa | exhibited at | Louvre Museum”, which links the entities “Mona Lisa” and “Louvre Museum” with the relationship “exhibited at”. Similarly, the graph could link the entities “Louvre Museum” and “Paris” with the relationship “located in”. A model can learn to use the facts in the knowledge graph to get to answer questions.\n\n![下载 3.gif](https://dev-media.amazoncloud.cn/063297de43be43adaf560950513ba4e6_%E4%B8%8B%E8%BD%BD%20%283%29.gif)\n\nQuestion answering through traversal of links in a knowledge graph.\n\nTraditional approaches to knowledge-graph-based question answering use a pipeline of models. First, the question goes into a semantic-parsing model, which is trained to predict queries. Queries can be thought of as instructions of what to do in the knowledge graph — for example, “find out where the ‘Mona Lisa’ is exhibited, then look up what city it’s in.” \n\nNext, the question goes into an entity resolution model, which links parts of the sentence, like “Mona Lisa”, to IDs in the knowledge graph, like Q12418.\n\n![下载 4.gif](https://dev-media.amazoncloud.cn/e1174c4f107743b3a685877660fa85dc_%E4%B8%8B%E8%BD%BD%20%284%29.gif)\n\nThe pipelined approach to knowledge-graph-based question answering.\n\nWhile this pipeline approach works, it does have some flaws. 
Traditional approaches to knowledge-graph-based question answering use a pipeline of models. First, the question goes into a semantic-parsing model, which is trained to predict queries. Queries can be thought of as instructions for what to do in the knowledge graph: for example, “find out where the ‘Mona Lisa’ is exhibited, then look up what city it’s in.”

Next, the question goes into an entity resolution model, which links parts of the sentence, like “Mona Lisa”, to IDs in the knowledge graph, like Q12418.

![The pipelined approach to knowledge-graph-based question answering](https://dev-media.amazoncloud.cn/e1174c4f107743b3a685877660fa85dc_%E4%B8%8B%E8%BD%BD%20%284%29.gif)

The pipelined approach to knowledge-graph-based question answering.

While this pipeline approach works, it has some flaws. Each model is trained independently, so each has to be evaluated and updated separately, and each requires annotations for training, which are time-consuming and expensive to collect.

#### **End-to-end question answering**

End-to-end question answering is a way to rectify these flaws. During training, an end-to-end question-answering model is given the question and the answer but no instructions about what to do in the knowledge graph. Instead, the model learns the instructions from the correct answer alone.

In 2020, [Cohen et al.](https://arxiv.org/abs/2002.06115) proposed a way to perform end-to-end question answering using differentiable knowledge graphs, which represent knowledge graphs as tensors and queries as differentiable mathematical operations. This allows for fully differentiable training, so that the answer alone provides a training signal to update all parts of the model.

![End-to-end question answering with a differentiable knowledge graph](https://dev-media.amazoncloud.cn/d361611016274874999dae280980c13b_%E4%B8%8B%E8%BD%BD.gif)

End-to-end question answering with a differentiable knowledge graph.
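The tensor idea can be sketched numerically: if an entity is a vector over the entity vocabulary and each relation is an adjacency matrix, then following a relation is a matrix-vector product, which is differentiable. The sketch below is a minimal illustration of that representation under assumed toy data; it is not the implementation from Cohen et al. or our papers.

```python
import numpy as np

# A tiny entity vocabulary and two relations, each stored as an
# E x E adjacency matrix (illustrative data only).
entities = ["Mona Lisa", "Louvre Museum", "Paris"]
E = len(entities)

def adjacency(facts):
    """M[i, j] = 1 if (entity i, relation, entity j) is in the graph."""
    M = np.zeros((E, E))
    for s, o in facts:
        M[entities.index(s), entities.index(o)] = 1.0
    return M

exhibited_at = adjacency([("Mona Lisa", "Louvre Museum")])
located_in = adjacency([("Louvre Museum", "Paris")])

# Start from a distribution over entities (here, one-hot on "Mona Lisa";
# in an end-to-end model this would come from entity resolution).
x = np.array([1.0, 0.0, 0.0])

# Following a relation is a matrix-vector product, so gradients can flow
# from the predicted answer back through every step of the query.
answer = x @ exhibited_at @ located_in
print(entities[int(answer.argmax())])  # Paris
```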
#### **“End-to-end entity resolution and question answering using differentiable knowledge graphs”**

In our first paper, we extend end-to-end models to include the training of an entity resolution component. Previous work left the task of entity resolution (linking “Mona Lisa” to Q12418) out of the scope of the model, relying instead on a separate entity resolution model or hand annotation of entities.

We propose a way to train entity resolution as part of the question-answering model. To do this, we start with a baseline model similar to the implementation by Cohen et al., which has an encoder-decoder structure and an attention mechanism that takes in a question and returns predicted answers with probabilities.

We add entity resolution to this baseline model by first introducing a span detection component. This identifies all the possible parts of the sentence (spans) that could refer to an entity. For example, in the question “Who wrote films starring Tom Hanks?”, there are multiple spans, such as “films” or “Tom”, and we want the model to learn to give a higher score to the correct span, “Tom Hanks”.

![An end-to-end model that jointly learns knowledge-graph-based question answering and entity resolution](https://dev-media.amazoncloud.cn/c7f7067dd2d04ca4b6b7866f2f686b24_image.png)

An end-to-end model that jointly learns knowledge-graph-based question answering and entity resolution.

Then, for each of the identified spans, our model ranks all the possible entities in the knowledge graph that the span could refer to. For example, “Tom Hanks” is also the name of a seismologist and a theologian, but we want the model to learn to give the actor a higher score.

Span detection and entity resolution happen jointly in a new entity resolution component, which returns possible entities with scores. Finding those entities in the knowledge graph and following the paths from the inference component yields the predicted answers. We believe that this joint modeling is responsible for our method’s increase in efficiency.
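As a rough illustration of the span detection and entity ranking steps just described, the sketch below enumerates candidate spans of a question and keeps the best-scoring (span, entity) pair. The candidate table and its scores are hypothetical stand-ins for the learned, jointly trained components in the paper.

```python
# Illustrative sketch of joint span detection and entity ranking.
# The candidate entities and scores below are hypothetical stand-ins
# for what a trained model would produce from the question encoding.

question = "Who wrote films starring Tom Hanks?"
tokens = question.rstrip("?").split()

def candidate_spans(tokens, max_len=3):
    """Enumerate contiguous token spans that might mention an entity."""
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            yield " ".join(tokens[i:j])

# Hypothetical knowledge-graph candidates per span, with stand-in scores.
entity_scores = {
    "Tom Hanks": [("Q2263 (actor)", 0.91), ("Q18757216 (seismologist)", 0.04)],
    "films": [("Q11424 (film)", 0.10)],
}

# Keep the best-scoring (span, entity, score) triple across all spans.
best = max(
    ((span, ent, score)
     for span in candidate_spans(tokens)
     for ent, score in entity_scores.get(span, [])),
    key=lambda t: t[2],
)
print(best)  # ('Tom Hanks', 'Q2263 (actor)', 0.91)
```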
In our experiments, we use two English question-answering datasets. We find that although our ER and E2E models perform slightly worse than the baseline, they come very close, with differences of about 7% and 5% on the two datasets.

This is an impressive result, since the baseline model uses hand-annotated entities from the datasets, while our model learns entity resolution jointly with question answering. With these findings, we demonstrate the feasibility of learning to perform entity resolution and multihop inference in a single end-to-end model.

#### **“Expanding end-to-end question answering on differentiable knowledge graphs with intersection”**

In our second paper, we extend end-to-end models to handle more-complex questions with multiple entities. Take, for example, the question “Who did Natalie Portman play in Star Wars?”. This question has two entities, Natalie Portman and Star Wars.

Previous end-to-end models were trained to follow paths originating with a single entity in a knowledge graph. However, this is not enough to answer questions with multiple entities. If we started at “Natalie Portman” and found all the roles she played, we would get roles that were not in Star Wars. If we started at Star Wars and found all its characters, we would get characters that Natalie Portman didn’t play. What we need is the intersection of the characters played by Natalie Portman and the characters in Star Wars.

To handle questions with multiple entities, we expand an end-to-end model with a new operation: intersection. For each entity in the question, the model follows paths from that entity independently and arrives at an intermediate answer. Then the model performs intersection, which we implement as the element-wise minimum of two vectors, to identify which entities the intermediate answers have in common. Only entities that appear in all intermediate answers are returned in the final answer.

![An end-to-end knowledge-graph-based question-answering model for queries with multiple entities](https://dev-media.amazoncloud.cn/a0ca763b7af340e1b3d4b1a1bf10b80d_image.png)

An end-to-end knowledge-graph-based question-answering model for queries with multiple entities.
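Because each intermediate answer is a soft distribution over entities, the intersection reduces to an element-wise minimum, as the paper describes. Here is a minimal numerical sketch; the entity vocabulary and scores are made up for illustration.

```python
import numpy as np

# Hypothetical entity vocabulary and soft intermediate answers (made-up scores).
entities = ["Padmé Amidala", "Jane Foster", "Luke Skywalker"]

roles_played_by_portman = np.array([0.8, 0.7, 0.0])  # paths from "Natalie Portman"
characters_in_star_wars = np.array([0.9, 0.0, 0.8])  # paths from "Star Wars"

# Intersection as an element-wise minimum: an entity scores high
# only if it scores high in every intermediate answer.
final = np.minimum(roles_played_by_portman, characters_in_star_wars)
print(entities[int(final.argmax())])  # Padmé Amidala
```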
In our experiments, we use two English question-answering datasets. Our results show that introducing intersection improves performance over the baseline by 3.7% on one and 8.9% on the other.

More importantly, we see that the improved performance comes from better handling of questions with multiple entities, where the intersection model surpasses the baseline by over 14% on one dataset and by 19% on the other.

In future work, we plan to continue developing end-to-end models by improving entity resolution, so that it is competitive with hand annotation of entities; integrating entity resolution with intersection; and learning to handle more-complex operations, such as maximums/minimums and counts.

#### **Amazon at EMNLP**

Read more about [Amazon's presence at EMNLP](https://www.amazon.science/conferences-and-events/emnlp-2021), including papers, membership in the organizing committee, and workshop and tutorial involvement.

ABOUT THE AUTHOR

#### **[Priyanka Sen](https://www.amazon.science/author/priyanka-sen)**

Priyanka Sen is a computational linguist in the Alexa AI organization.