{"value":"![image.png](https://dev-media.amazoncloud.cn/12ee7c8a7e9041aa80906647776bfe30_image.png)\n\nKathleen McKeown is the Henry and Gertrude Rothschild professor of computer science at Columbia University, founding director of the school’s Data Science Institute, and an Amazon Scholar.\nGLYNIS CONDON\n\nThe first [Amazon Web Services (Amazon Web Services) Machine Learning Summit](https://www.amazon.science/latest-news/amazon-web-services-to-hold-machine-learning-summit) on June 2 will bring together customers, developers, and the science community to learn about advances in the practice of machine learning (ML). The event, which is free to attend, will feature four audience-focused tracks, including Science of Machine Learning.\n\nThe science track is focused on the data science and advanced practitioner audience and will highlight the work Amazon Web Services and Amazon scientists are doing to advance machine learning. The track will comprise six sessions, each lasting 30 minutes, and a 45-minute fireside chat.\n\nAmazon Science is [featuring ](https://www.amazon.science/latest-news/3-questions-with-marzia-polito-performing-computer-vision-tasks-at-scale-with-few-shot-learning)[interviews ](https://www.amazon.science/latest-news/3-questions-with-michael-kearns-designing-socially-aware-algorithms-and-models)[with ](https://www.amazon.science/latest-news/3-questions-with-philip-resnik-analyzing-social-media-to-understand-the-risks-of-suicide)[speakers ](https://www.amazon.science/latest-news/3-questions-with-ryan-tibshirani-the-science-behind-covidcast-and-pandemic-tracking)from the Science of Machine Learning track. For the fifth edition of the series, we spoke to [Kathleen McKeown](https://www.amazon.science/working-at-amazon/amazon-scholar-kathleen-mckeown-takes-stock-of-natural-language-processing-where-we-are-and-where-were-going), Henry and Gertrude Rothschild professor of computer science at Columbia University, and founding director of the school’s Data Science Institute. Her research has been focused on text summarization, natural language generation, multi-media explanation, question-answering, and multilingual applications.\n\nMcKeown is also an [Amazon Scholar](https://www.amazon.science/scholars), an expanding group of academics who work on large-scale technical challenges for Amazon while continuing to teach and conduct research at their universities.\n\n\n#### **Q. What is the subject of your talk going to be at the ML Summit?**\n\n\nOver the last few years, my research has focused on — among other areas — the field of natural language generation. At the Amazon Web Services ML Summit, I will be talking about the need to control choices related to parameters when generating language.\n\nA program that generates language must make choices about how to express information. For example, algorithms have to determine what content to convey, what words to use, and how to order the words to form a sentence. Deep learning can generate remarkably fluent text. However, it is often not possible to control how these choices are made based on the input goals specified.\n\nA primary concern for controllable language generation is that the output should be faithful to the input. Neural language generation approaches are known to hallucinate content, resulting in generated text that conveys information that did not appear in the input. 
Current language generation techniques that rely on large-scale language models can generate fluent text. However, they find it difficult to control word choice or word order in ways that guarantee that the speaker's intent comes across accurately. During my talk, I will detail techniques that allow models to render speaker intent faithfully.

#### **Q. Why is this topic especially relevant within the science community today?**

There are many different use cases for natural language generation. These include [generating language from data](https://www.amazon.science/blog/automatically-generating-text-from-structured-data), summarizing text, and generating accurate responses in dialogue and machine translation.

In all these use cases, hallucination has been a problem. A system that generates inaccurate text is far worse than one that generates less fluent text.

#### **Q. Can you elaborate on some of the techniques for faithful generation from data, faithful summarization of input text, and methods for controlling ordering of content?**

One of the ways hallucination can occur is when the training data does not contain data or phrases that occur in the input. Approaches to faithful generation and summarization primarily rely on techniques involving data augmentation.

For language generation from data, we use a self-training method to augment the original training data with instances consisting of pairs of structured data and related text that conveys that information, each of which is completely novel.

#### **Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding can generate semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.**

Kathleen McKeown

Remarkably, after training on the augmented data, [even simple encoder-decoder models with greedy decoding](https://arxiv.org/pdf/1911.03373.pdf) can generate semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.
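To make the self-training idea concrete, here is a minimal, generic sketch of that kind of augmentation loop. The `train` and `is_faithful` callables are placeholders, and the details in the linked paper may differ; the point is only the shape of the recipe: generate text for unlabeled structured inputs with the current model, keep the pairs judged faithful, and retrain on the enlarged set.

```python
from typing import Callable

# Hypothetical types for illustration; not the paper's exact procedure.
Data = dict[str, str]    # structured input, e.g. {"name": "...", "food": "..."}
Pair = tuple[Data, str]  # (structured data, text that expresses it)

def self_train_augment(
    seed_pairs: list[Pair],
    unlabeled_data: list[Data],
    train: Callable[[list[Pair]], Callable[[Data], str]],  # fits a model, returns a generator
    is_faithful: Callable[[Data, str], bool],               # e.g. parse text back and compare
    rounds: int = 2,
) -> list[Pair]:
    """Generic self-training loop: generate text for unlabeled structured inputs,
    keep only candidates judged faithful, and fold them back into the training set."""
    pairs = list(seed_pairs)
    for _ in range(rounds):
        generate = train(pairs)  # fit (or fine-tune) a model on the current pairs
        candidates = [(d, generate(d)) for d in unlabeled_data]
        kept = [(d, text) for d, text in candidates if is_faithful(d, text)]
        pairs = list(seed_pairs) + kept  # seed data plus the newly accepted pairs
    return pairs
```

Greedy decoding, as referenced in the quote above, simply means emitting the single most probable token at each step rather than searching over alternatives with, say, beam search.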
We also studied the degree to which neural sequence-to-sequence models exhibit fine-grained controllability when performing natural language generation from a meaning representation. More specifically, we looked at the effects of controlling the word order.

Suppose we have a meaning representation that gives the name of a restaurant, its location, and the type of food it serves. The sentence could order the information so that the restaurant name appears first. Or it could put the food before the restaurant name and the location. Or it could begin by providing the location.

We can imagine scenarios in which the different orderings are more appropriate. For example, suppose that someone asks the question, “Where can I find a good place for coffee?” Here, the focus in the response might be on the location. However, for a question like “Where is Café Aramis located and what does it serve?”, we would expect the response to start with “Café Aramis.”
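To make the ordering idea concrete, here is a small, hypothetical example in the style of restaurant-domain meaning representations (the attribute names and format are assumptions; the article does not name a dataset). Each linearization of the same three attributes corresponds to a different emphasis in the realized sentence:

```python
# Hypothetical meaning representation; "Café Aramis" is the example name used above.
mr = {"name": "Café Aramis", "area": "riverside", "food": "coffee"}

# Three linearizations of the same attributes, paired with the kind of
# realization a controllable generator would be expected to produce.
orderings = {
    ("name", "area", "food"): "Café Aramis, on the riverside, serves coffee.",
    ("area", "name", "food"): "On the riverside, Café Aramis serves coffee.",
    ("food", "name", "area"): "For coffee, try Café Aramis on the riverside.",
}

for order, realization in orderings.items():
    linearized = " | ".join(f"{key}={mr[key]}" for key in order)
    print(f"{linearized}\n  -> {realization}")
```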
We systematically compared the effect of four input linearization strategies on controllability and faithfulness. Linearization refers to the task of finding the correct grammatical order of a given set of words. We also used data augmentation to improve performance. We found that properly aligning input sequences during training leads to highly controllable generation, both when training from scratch and when fine-tuning a larger pre-trained model.
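The sketch below shows one plausible reading of “properly aligning input sequences during training”: reorder each training meaning representation so its attributes appear in the order in which they are realized in the reference sentence, so that input order and output order agree. This is an illustrative guess, not the exact alignment procedure used in the work described.

```python
# Illustrative alignment step (an assumption, not the paper's exact method):
# sort each training MR's attributes by where their values first appear in
# the reference sentence, so input order matches realization order.
def align_to_reference(mr: dict[str, str], reference: str) -> list[tuple[str, str]]:
    ref = reference.lower()
    positions = {key: ref.find(value.lower()) for key, value in mr.items()}
    # Attributes whose values are not found verbatim go to the end.
    return sorted(mr.items(), key=lambda kv: (positions[kv[0]] < 0, positions[kv[0]]))

mr = {"name": "Café Aramis", "food": "coffee", "area": "riverside"}
reference = "On the riverside, Café Aramis serves coffee."
print(align_to_reference(mr, reference))
# [('area', 'riverside'), ('name', 'Café Aramis'), ('food', 'coffee')]
```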
You can learn more about McKeown’s research here, and watch her free talk at the virtual AWS Machine Learning Summit on June 2 by registering at the link below.

ABOUT THE AUTHOR

#### **Staff writer**