{"value":"As a Senior Program Committee member at this year’s Knowledge Discovery and Data Mining Conference (KDD), with a wide perspective on paper submissions, Chandan Reddy noticed two major research trends: work on graph neural networks and on self-supervised learning.\n\n![image.png](https://dev-media.amazoncloud.cn/9e2311fa83004869893ea031e2521fb5_image.png)\n\nChandan Reddy, an Amazon Scholar and professor of computer science at Virginia Tech.\n\n“++[Graph neural networks](https://www.amazon.science/tag/graph-neural-networks)++ has been an extremely hot topic of research in recent years, and at this year’s KDD conference as well,” says Reddy, an Amazon Scholar and a professor of computer science at Virginia Tech. “In machine learning, you often assume that the different data samples are independent of each other. But in the real world, you always have more information about relationships between two entities. If you have two people, there are connections between them. Knowing about your neighbor, we can start to predict something about you. So naturally you have a lot of data that is being collected that can be represented in the form of graphs.”\n\nIn the context of knowledge discovery, the nodes of the graph usually represent entities, and the edges usually represent relationships between them. Graph neural networks provide a way to represent nodes as vectors in a multidimensional space, such that nodes’ locations in the space encode information about their relationships to each other. Graph neural networks can, for instance, help identify missing edges in a graph — that is, previously unnoticed relationships between entities.\n\nWith ++[self-supervised learning](https://www.amazon.science/tag/self-supervised-learning)++, a machine learning model is trained, using unlabeled data, on a proxy task that is related to its target task but not identical to it. Then it’s fine tuned on labeled data. If the proxy task is well chosen, this can dramatically reduce the need for labeled data.\n\n#### **Amazon at KDD**\n\n++[Read more](https://www.amazon.science/conferences-and-events/KDD-2021)++ about Amazon's involvement at KDD — papers, program committee membership, and participation in workshops and tutorials.\n\nSelf-supervised learning “was introduced in natural-language processing about three years back through this BERT model and some other masked language-modeling approaches,” Reddy explains. “It has now become kind of a mainstream topic in the data-mining community.”\n\nBERT is a language model, meaning that it encodes the probabilities of different sequences of words in a particular language. It’s trained on unlabeled texts in which individual words have been randomly masked out, and its proxy task is to fill in the missing words.\n\n“In graph neural networks, the analogy is that you remove an edge and you try to predict whether there was an edge or not,” Reddy explains. “Based on that, you can then use that information to learn the dependencies between the nodes.”\n\n#### **Application-specific representations**\n\nBut, Reddy explains, while the same basic BERT model has proved useful for a wide range of problems in natural-language processing (NLP), the ideal vector representation of a node in a knowledge network is very much dependent on the ultimate application. In part, this is because knowledge networks can have heterogeneous data types. 
With ++[self-supervised learning](https://www.amazon.science/tag/self-supervised-learning)++, a machine learning model is trained, using unlabeled data, on a proxy task that is related to its target task but not identical to it. Then it’s fine-tuned on labeled data. If the proxy task is well chosen, this can dramatically reduce the need for labeled data.

#### **Amazon at KDD**

++[Read more](https://www.amazon.science/conferences-and-events/KDD-2021)++ about Amazon’s involvement at KDD — papers, program committee membership, and participation in workshops and tutorials.

Self-supervised learning “was introduced in natural-language processing about three years back through this BERT model and some other masked language-modeling approaches,” Reddy explains. “It has now become kind of a mainstream topic in the data-mining community.”

BERT is a language model, meaning that it encodes the probabilities of different sequences of words in a particular language. It’s trained on unlabeled texts in which individual words have been randomly masked out, and its proxy task is to fill in the missing words.
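As a rough illustration of that proxy task, the sketch below randomly hides a fraction of the words in a sentence and keeps the originals as the targets the model must recover. It is deliberately simplified: BERT’s actual recipe works on subword tokens and sometimes swaps in a random word or leaves the chosen word unchanged rather than always substituting a mask symbol.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Create a masked-language-modeling example: hide a random subset of
    tokens and keep the originals as labels the model must recover."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)    # the model is trained to predict this
        else:
            masked.append(tok)
            labels.append(None)   # no loss is computed at this position
    return masked, labels

sentence = "graph neural networks learn from relational data".split()
print(mask_tokens(sentence, seed=3))
```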
“In graph neural networks, the analogy is that you remove an edge and you try to predict whether there was an edge or not,” Reddy explains. “Based on that, you can then use that information to learn the dependencies between the nodes.”
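Here is a hedged sketch of that graph analogue: hold out a fraction of a graph’s edges, pair them with randomly sampled non-edges, and the resulting labeled pairs become the self-supervised proxy task for a link prediction model. The small example graph and the 20 percent holdout fraction are arbitrary choices, not prescriptions.

```python
import random

def make_edge_prediction_task(edges, num_nodes, holdout_frac=0.2, seed=0):
    """Split a graph's edges into a training graph and a proxy task:
    predict which held-out pairs were real edges (label 1) and which
    were randomly sampled non-edges (label 0)."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_hold = max(1, int(holdout_frac * len(edges)))
    held_out, train_edges = edges[:n_hold], edges[n_hold:]

    existing = {frozenset(e) for e in edges}
    negatives = []
    while len(negatives) < len(held_out):
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and frozenset((u, v)) not in existing:
            negatives.append((u, v))

    examples = [(e, 1) for e in held_out] + [(e, 0) for e in negatives]
    rng.shuffle(examples)
    return train_edges, examples

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
train_edges, examples = make_edge_prediction_task(edges, num_nodes=5)
print(train_edges)
print(examples)
```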
#### **Application-specific representations**

But, Reddy explains, while the same basic BERT model has proved useful for a wide range of problems in natural-language processing (NLP), the ideal vector representation of a node in a knowledge network is very much dependent on the ultimate application. In part, this is because knowledge networks can have heterogeneous data types. A graph depicting online shoppers’ buying preferences, for example, could have nodes representing classes of products, nodes representing specific products, and nodes representing product features, such as battery capacity or fabric type.

#### **In machine learning, you often assume that the different data samples are independent of each other. But in the real world, you always have more information about relationships between two entities.**

Chandan Reddy

“When you have a link prediction model, where you want to predict whether a link can be formed between these two nodes, you don’t want to learn a single representation for a particular node,” Reddy explains. “If a person has to be recommended a book, the representation has to be different from the same person being recommended a movie. You would want a book representation that is different when it is being recommended to a group of people who are interested in this genre of books or if it’s being recommended to a person who’s interested in a different genre of books. In some sense you have to have ++[a multiaspect or a multiview representation](https://people.cs.vt.edu/~reddy/papers/WWW21.pdf)++ of this node.”

In his own research, Reddy frequently works on knowledge discovery for health care, where the problem of data heterogeneity is particularly acute.

“Some of these lab values, for example, you are monitoring over time,” he explains. “The patient is admitted to an ICU, and blood pressure, blood work is done on a regular basis every 12 hours. So you have time series data, which is sequential in nature. You have demographic data, which is static in nature. And then you have clinical notes, which are again sequential, but they’re not temporal, whereas in time series it is temporal. And you have image data in the form of x-rays and CT scans.

“Now we have to come up with a deep-learning model that can leverage all these different forms of data. Health care is just one application, but you can think of so many other applications where leveraging such multimodal data is becoming an important problem. In real-world data, you don’t just see data in one particular form. You have multiple heterogeneous forms of data that are collected about any particular entity.”
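One common (though by no means the only) way to build such a model is to give each modality its own encoder and fuse the resulting vectors in a shared prediction head. The PyTorch sketch below is an illustrative stand-in, assuming made-up input shapes and deliberately small encoders; it is not a description of any production or clinical system.

```python
import torch
import torch.nn as nn

class MultimodalPatientModel(nn.Module):
    """Illustrative fusion model: each modality gets its own encoder, and the
    resulting vectors are concatenated for a shared prediction head. Encoder
    choices and dimensions are placeholders, not a clinical system."""
    def __init__(self, vitals_dim=12, demo_dim=8, vocab_size=5000, hidden=64):
        super().__init__()
        self.vitals_enc = nn.GRU(vitals_dim, hidden, batch_first=True)  # time series (temporal)
        self.demo_enc = nn.Linear(demo_dim, hidden)                     # static demographics
        self.notes_emb = nn.EmbeddingBag(vocab_size, hidden)            # clinical notes (bag of tokens here)
        self.image_enc = nn.Sequential(                                 # x-ray / CT scan
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, hidden))
        self.head = nn.Linear(4 * hidden, 1)                            # e.g., a risk score

    def forward(self, vitals, demographics, note_tokens, image):
        _, h_vitals = self.vitals_enc(vitals)            # final GRU state: (1, batch, hidden)
        z = torch.cat([
            h_vitals.squeeze(0),
            torch.relu(self.demo_enc(demographics)),
            self.notes_emb(note_tokens),
            self.image_enc(image),
        ], dim=-1)
        return self.head(z)

model = MultimodalPatientModel()
out = model(torch.randn(2, 24, 12),           # 24 time steps of 12 vital signs
            torch.randn(2, 8),                # demographic features
            torch.randint(0, 5000, (2, 50)),  # tokenized note snippets
            torch.randn(2, 1, 64, 64))        # single-channel scan
print(out.shape)  # torch.Size([2, 1])
```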
#### **Learning efficiently**

Self-supervised learning is, fundamentally, a technique for doing machine learning more efficiently: labeling data is costly and labor-intensive, and leveraging unlabeled data reduces dependence on labeled data. In addition to serving as a Senior Program Committee member at KDD, Reddy is also one of the organizers of the conference’s ++[Workshop on Data-Efficient Machine Learning](https://demalworkshop.github.io/kdd2021/index.html)++, together with Amazon’s ++[Nikhil Rao](https://www.amazon.science/author/nikhil-rao)++ and ++[Sumeet Katariya](https://www.amazon.science/author/sumeet-katariya)++.

“People talk a lot about domain adaptation in the presence of limited data,” Reddy says. “There are different topics related to it like few-shot or zero-shot learning, transfer learning, meta-learning, multitask learning, et cetera. Some people talk about out-of-domain distribution. There are several concepts that try to achieve data-efficient learning in real-world applications. We wanted to have all these discussions in a more coherent manner in this workshop, so we can share knowledge, we can see what works, what doesn’t. We tried to bring people from different communities so that they can learn both success and failure stories of different approaches in various domains.

“Some of these graph papers that were published last year are basically inspired by a simple technique that was borrowed from the NLP and computer vision communities. We are trying to see if we can share more recent trends and knowledge from these domains.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.