Amazon at ACL: How to teach machines to reason

{"value":"As a senior area chair at this year’s meeting of the Association for Computational Linguistics ([ACL](https://www.amazon.science/conferences-and-events/acl-ijcnlp-2021)), Dan Roth, who recently joined Amazon Web Services’ AI organization as science lead for natural-language processing, has a good vantage on paper submissions to the conference. On this year’s program, one theme leaped out at him.\n\n“I looked at some statistics of papers in ACL, and I saw that there are dozens of papers now that have ‘reasoning’ in the title,” says Roth, who is also the Glandt Distinguished Professor in the University of Pennsylvania’s Department of Computer and Information Science. “The title ‘learning to reason’ is now becoming sort of hot. I think a lot of AI is going in that direction.”\n\nMachine reasoning, Roth says, is “the ability to make inferences, especially in ‘sparse’ situations that are unlikely to have been observed before”. The classic example is deduction: from the facts that all women are mortal and that Sappho is a woman, a machine reasoning system should infer that Sappho is mortal.\n\nRoth is well situated to review recent progress in the field, as it’s been a topic of his own research for more than 25 years. \n\n![image.png](https://dev-media.amazoncloud.cn/ab933184f46a489ab12c007c769e829c_image.png)\n\nDan Roth, science lead for natural-language processing in Amazon Web Services’ AI organization and the Glandt Distinguished Professor in the University of Pennsylvania’s Department of Computer and Information Science.\n\n“This was actually my PhD work,” he says. “Learning theory was an emerging field at that time. The questions were basically, How can we formalize learning, and what does it mean that something is learnable or not learnable? What are the computational-complexity issues in learning? I was trying to move this towards questions in reasoning, which were never studied from a theoretical perspective or computational-complexity perspective.\n\n“The assumption was that someone gives you an input — a knowledge base, for example — and you present reasoning queries to it, and in this context you want to show what can be computed. My PhD thesis was about showing that if you don't start from a knowledge base, but you jointly do learning from data and reasoning from the resulting, intermediate representation, it’s easier than doing each one of them separately. You could say that end-to-end learning today is an instantiation of this learning-to-reason process, although just conceptually. Technically, the things are very, very different.”\n\n#### **Compositionality**\n\nEven though Roth is, in a sense, a pioneer of end-to-end reasoning models, he believes that more-complex reasoning problems will require more-complex modeling.\n\n“We have a lot of hard problems that we are far from being able to address using just one model,” he says. “A lot of the problems will require thinking about things in a modular way. \n\n#### **Amazon at ACL**\n\nLearn more about Amazon's involvement at ++[ACL 2021](https://www.amazon.science/conferences-and-events/acl-ijcnlp-2021)++ — research papers, workshops and tutorials, and committee memberships.\n\n“I'll give you a simple example. I want to ask my virtual assistant, ‘Are we going to make it to dinner before the movie?’ What does this assistant need to do in order to respond to my question? It needs to know where I am now, where the movie is, how long it's going to take to get there — that's easy to do today. How long is dinner? 
Roth is well situated to review recent progress in the field, as it’s been a topic of his own research for more than 25 years.

![image.png](https://dev-media.amazoncloud.cn/ab933184f46a489ab12c007c769e829c_image.png)

Dan Roth, science lead for natural-language processing in Amazon Web Services’ AI organization and the Glandt Distinguished Professor in the University of Pennsylvania’s Department of Computer and Information Science.

“This was actually my PhD work,” he says. “Learning theory was an emerging field at that time. The questions were basically, How can we formalize learning, and what does it mean that something is learnable or not learnable? What are the computational-complexity issues in learning? I was trying to move this towards questions in reasoning, which had never been studied from a theoretical or computational-complexity perspective.

“The assumption was that someone gives you an input — a knowledge base, for example — and you present reasoning queries to it, and in this context you want to show what can be computed. My PhD thesis was about showing that if you don’t start from a knowledge base, but you jointly do learning from data and reasoning from the resulting, intermediate representation, it’s easier than doing each one of them separately. You could say that end-to-end learning today is an instantiation of this learning-to-reason process, although just conceptually. Technically, the things are very, very different.”

#### **Compositionality**

Even though Roth is, in a sense, a pioneer of end-to-end reasoning models, he believes that more-complex reasoning problems will require more-complex modeling.

“We have a lot of hard problems that we are far from being able to address using just one model,” he says. “A lot of the problems will require thinking about things in a modular way.

#### **Amazon at ACL**

Learn more about Amazon’s involvement at [ACL 2021](https://www.amazon.science/conferences-and-events/acl-ijcnlp-2021) — research papers, workshops and tutorials, and committee memberships.

“I’ll give you a simple example. I want to ask my virtual assistant, ‘Are we going to make it to dinner before the movie?’ What does this assistant need to do in order to respond to my question? It needs to know where I am now, where the movie is, how long it’s going to take to get there — that’s easy to do today. How long is dinner? I didn’t say anything about it, but we have some idea of the typical length of dinner, maybe as a function of where dinner is. Do I need to find parking? I didn’t mention parking. It’s an implicit event, but we know that I have to park, maybe next to the dinner place, maybe next to the movie. I have to factor this in.

“So I have to have models that know how to compute things, have some common sense — typical time of dinner, typical time of finding parking, driving between these places. And then I need a model that knows how to put this together. It’s not going to be the same model, because I’m not going to train on each question. Many of the problems that we want to address are like that, where there’s modularity, and we will never be able to move forward without realizing that there is modularity.”
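A minimal sketch of that composition might look like the following. Every function here is a hypothetical stand-in: in a real assistant each would be backed by its own learned model or service, and the composing function is what chains their outputs into one answer.

```python
from datetime import datetime, timedelta

# Hypothetical stand-ins for the specialized modules Roth describes.
def travel_minutes(origin: str, destination: str) -> int:
    return 25          # e.g., from a routing service

def typical_dinner_minutes(venue: str) -> int:
    return 75          # commonsense estimate, perhaps conditioned on venue type

def parking_minutes(destination: str) -> int:
    return 10          # the implicit event the question never mentions

def makes_movie(now: datetime, dinner_venue: str, theater: str,
                showtime: datetime) -> bool:
    # The composing model: chain the modules' outputs into one answer.
    total = (travel_minutes("home", dinner_venue)
             + parking_minutes(dinner_venue)
             + typical_dinner_minutes(dinner_venue)
             + travel_minutes(dinner_venue, theater)
             + parking_minutes(theater))
    return now + timedelta(minutes=total) <= showtime

now = datetime(2021, 8, 1, 17, 30)
print(makes_movie(now, "bistro", "cinema", datetime(2021, 8, 1, 20, 0)))
```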
#### **Symbolic reasoning**

Moreover, Roth says, the systems that integrate these separate modules will almost certainly need to use symbolic reasoning, or rule-based manipulation of symbolic representations.

“The growth and the excitement around neural networks have left symbols behind,” Roth says. “Some people think that symbols are an evil invention of the old AI people. But symbols were invented because they’re useful, necessary abstractions. And also, explanations are symbolic, right? When you ask me, ‘Why did you decide this?’ or ‘Why is this implied by that?’, I need to explain it to you, and I need to use symbols when I do this. So I think we are beginning to explore this interesting space between models that are continuous, if you like, and interactions that are largely symbolic.

“I’ll give you an example. I’ve worked a lot on reasoning about time, as expressed in natural-language text. If you want to reason about events, you have to use the fact — and people do it all the time — that time is transitive. If A happens before B, and B happens before C, then A happens before C. This will never be written explicitly. So we kind of tell our models ‘Time is transitive,’ and we can show that this helps a lot.”
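That rule is simple enough to state in code. Assuming a model has extracted only the pairwise orderings a text states explicitly, a transitive-closure step (event names below are invented for illustration) completes them:

```python
# A model might extract only the "before" pairs stated in text;
# transitivity lets us derive the orderings that are never written down.
def transitive_closure(before: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(before)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))   # a before b, b before d => a before d
                    changed = True
    return closure

stated = {("breakfast", "meeting"), ("meeting", "flight")}
print(("breakfast", "flight") in transitive_closure(stated))  # True, never stated
```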
The transitivity of time, however, is something that can be represented in the architecture of a neural network. That won’t always be the case, Roth explains.

#### **Some people think that symbols are an evil invention of the old AI people. But symbols were invented because they’re useful, necessary abstractions**

Dan Roth

“There are some cases where only in postprocessing are you aware of some declarative constraints,” Roth says. “Once you evaluate your model, once you decode, once you make the decision — only then do you want to impose a declarative constraint. Sometimes there are constraints that I was unaware of while I was training the model: the model is fixed, I trained it yesterday, but now I’m using it in a given situation where I’m aware of a constraint, and I want to be able to impose it. And there is very interesting theoretical work that people are doing now on trying to understand the advantages and disadvantages of these two paradigms — when which one is better. But the fact of the matter is that we need both.”
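One common way to impose such a constraint in postprocessing, sketched below with invented candidates and an invented rule, is to rerank the trained model’s scored outputs and keep the best one that satisfies the constraint:

```python
# The model is fixed and already trained; the constraint arrives at inference.
# Candidates, scores, and the rule below are all invented for illustration.

candidates = [                            # (label sequence, model score)
    (["dine", "arrive", "park"], 0.52),   # model's top choice violates the rule
    (["arrive", "park", "dine"], 0.48),
]

def satisfies(sequence: list[str]) -> bool:
    # Declarative constraint known only now: "arrive" must precede "dine".
    return sequence.index("arrive") < sequence.index("dine")

# Impose the constraint in postprocessing: the best feasible output wins.
best_sequence, best_score = max(
    ((seq, score) for seq, score in candidates if satisfies(seq)),
    key=lambda pair: pair[1],
)
print(best_sequence)                      # ['arrive', 'park', 'dine']
```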
“In the last five years, deep neural networks have had a huge impact, especially in the context of natural language,” Roth adds. “There’s a lot of excitement, for good reason. But sooner or later, people get to the realization that that’s not sufficient. I think today, more and more people are beginning to think about reasoning problems and the need to decompose and compose to address them.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.