Amazon at WSDM: The future of graph neural networks

{"value":"George Karypis, a senior principal scientist at Amazon Web Services, is one of the keynote speakers at this year’s Conference on Web Search and Data Mining (++[WSDM](https://www.amazon.science/conferences-and-events/wsdm-2022)++), and his topic will be [graph neural networks](https://www.amazon.science/tag/graph-neural-networks), his [chief area of research](https://www.amazon.science/author/george-karypis) at Amazon.\n\n![image.png](https://dev-media.amazoncloud.cn/570b393932554ed387d33d975a9f151d_image.png)\n\nGeorge Karypis, a senior principal scientist at Amazon Web Services.\n\n“A lot of the WSDM crowd are looking at relations between entities, especially if you think in terms of the web and social networks,” Karypis says. “If I'm going to develop deep-learning techniques to compute a representation of a graph, then a graph neural network is the right formalism to do that.”\n\nA graph consists of nodes, often depicted as circles, and edges, often depicted as line segments connecting nodes. Graphs are infinitely expressive: the nodes might represent atoms in a molecule and the edges the bonds between them; or, as in a knowledge graph, the nodes could represent entities and the edges relationships between them; or, as in a recommendation engine, the nodes could represent both customers and products, and edges could indicate both similarity between products and which customers have bought which products.\n\nGraph neural networks (GNNs) represent information contained in graphs as vectors, so other machine learning models can make use of that information.\n\n“In the standard machine learning workflow nowadays, we compute a representation of a piece of text,” Karypis says. “I then use that representation as input to a downstream model. I either do an end-to-end fine tuning of my language model or just use it the way it is, as a kind of a static representation.\n\n“We do exactly the same thing for graphs using graph neural networks. For example, in many drug discovery use cases, I can pretrain a graph neural network so that it learns how to compute a representation of small molecules. Then I can take that representation as input to another model that predicts various physicochemical properties of the molecules.”\n\nIn addition to providing inputs to downstream models, GNNs can also be used to predict properties of the graphs themselves — deducing missing edges, for instance.\n\n“In that case, you still compute representations of the two nodes that potentially are connected, and then you learn a model that answers the question, ‘Given the representations, are these nodes connected?’” Karypis says. “So you do pretty much the same thing there as well.”\n\n#### **Scope of representation**\n\nGraphs are so useful because their structure encodes information beyond the information encoded in individual nodes — the characteristics of particular atoms, products, or customers, for instance. One outstanding research question in the field is how much of that structural information a GNN representation can capture.\n\nComputing node representations is an iterative process. The first step is to compute a representation of each node. The next step is to update each node’s representation, taking into account both its previous representation and the representations of its immediate neighbors. 
#### **Scope of representation**

Graphs are so useful because their structure encodes information beyond the information encoded in individual nodes — the characteristics of particular atoms, products, or customers, for instance. One outstanding research question in the field is how much of that structural information a GNN representation can capture.

Computing node representations is an iterative process. The first step is to compute a representation of each node. The next step is to update each node’s representation, taking into account both its previous representation and the representations of its immediate neighbors. Every repetition of this process extends the scope of the representation by one hop.

![Animation of iterative aggregation in a graph neural network](https://dev-media.amazoncloud.cn/ef2f13cde2ac4718ad88c42c8b07711f_%E4%B8%8B%E8%BD%BD%20%281%29.gif)

A demonstration of the iterative process a graph neural network might use to condense the information in a two-hop graph into a single vector. Relationships between entities — such as "produce" and "write" in a movie database (red and yellow arrows, respectively) — are encoded in the initial representations (level-0 embeddings) of the entities themselves (red and orange blocks). Animation from the blog post "[Combining knowledge graphs, quickly and accurately](https://www.amazon.science/blog/combining-knowledge-graphs-quickly-and-accurately)".

**STACY REILLY**

“The problem is that if you keep on doing that, then pretty much every node will end up becoming the same,” Karypis says. “In GNNs we call that oversmoothing. For some networks, like those coming from natural graphs, this often happens after a very small number of steps. Think of social networks and the [Kevin Bacon game](https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon). It does not take many hops before you hit a large fraction of the nodes.

“In the past year or two, there has been a lot of research work on how I can still get information from faraway neighbors but not get to the point that every node becomes pretty much the same because I have oversmoothed all the information.”
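The mechanics of this iterative aggregation, and the oversmoothing it can lead to, show up even in a toy sketch. The version below strips out the learned weights and nonlinearities a real GNN layer would apply and keeps only the neighborhood averaging; the path graph, random features, and hop count are made up for illustration.

```python
# Toy demonstration of neighborhood aggregation and oversmoothing.
# Real GNN layers interleave learned weights and nonlinearities; this
# sketch keeps only the averaging step to make the effect visible.
import numpy as np

# Adjacency matrix of a small path graph: 0 - 1 - 2 - 3.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Add self-loops so each node keeps part of its own representation,
# then row-normalize so each update is a mean over the neighborhood.
A_hat = A + np.eye(4)
A_hat /= A_hat.sum(axis=1, keepdims=True)

# Random initial node features (the level-0 embeddings).
H = np.random.default_rng(0).normal(size=(4, 3))

for hop in range(1, 11):
    H = A_hat @ H  # one round of aggregation = one more hop of context
    spread = np.ptp(H, axis=0).max()  # how far apart the rows still are
    print(f"hop {hop:2d}: max spread across nodes = {spread:.4f}")

# The spread shrinks toward zero: every node's representation is
# collapsing to the same vector, which is the oversmoothing effect.
```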
#### **Questions of translation**

Another outstanding research question, Karypis says, is how to represent data in graph form in the first place, because this has a significant effect on GNN performance.

“There are certain application domains where we’ve been very successful in developing accurate GNN-based models,” Karypis says. “For example, for domains in which the underlying data is already a graph, such as small and large molecules or knowledge graphs, we have very good GNN models.

“For domains for which there are multiple ways to model the underlying data via a graph, it often takes a lot of trial and error to develop successful GNN-based approaches, because we need to consider the interplay between the graph and the GNN model.

#### **GNN models that can tolerate variations in how the underlying data is modeled will go a long way toward reducing the effort required to develop successful GNN-based approaches.**

George Karypis

“If I look at a relational database, let’s say I have information about you, like your address. I can choose to create a table for the street name, a table for the zip code, and a table for the city. Then I can create a table for the address. Its rows will have a [foreign key](https://en.wikipedia.org/wiki/Foreign_key) to the zip code table, a foreign key to the street name table, and a foreign key to the city table. Then, in the table that stores information about you, I can have a foreign key to that address table.

“Alternatively, I can choose to create three different columns in the main table, with street number, city, and zip code. Now if I’m going to view those things as a graph, in one case everything will be pretty much directly connected: if I have a node for a particular row, that node will be connected to another node that has the street number and street name and so forth. In the other case, I’m going to have a pointer to another table, which will have pointers to the three other tables that contain that information.

“All of a sudden, something will go from being one hop away to potentially being three hops away or even more. That creates a very different topology when I’m trying to aggregate information within the context of a GNN. Developing GNN models that can tolerate variations in how the underlying data is modeled will go a long way toward reducing the effort required to develop successful GNN-based approaches.
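As a concrete, hypothetical illustration of that hop-count difference, the sketch below encodes the same address data under both schemas and measures how far the city information sits from the person's row node. The node names, and the choice of the networkx library, are assumptions made purely for illustration.

```python
# Hypothetical sketch: the same address data modeled as a graph in two
# ways, following the two schemas described above. Only the topology
# matters; node names are made up.
import networkx as nx

# Denormalized schema: street, city, and zip are columns of the main
# table, so their nodes hang directly off the person's row node.
flat = nx.Graph()
flat.add_edges_from([
    ("person_row", "street"), ("person_row", "city"), ("person_row", "zip"),
])

# Normalized schema: the person's row points to an address row, which
# in turn points to separate street, city, and zip tables.
normalized = nx.Graph()
normalized.add_edges_from([
    ("person_row", "address_row"),
    ("address_row", "street"), ("address_row", "city"), ("address_row", "zip"),
])

for name, g in [("flat", flat), ("normalized", normalized)]:
    hops = nx.shortest_path_length(g, "person_row", "city")
    print(f"{name} schema: city is {hops} hop(s) from the person's node")

# A one-layer GNN already sees the city in the flat encoding; the
# normalized encoding needs at least two rounds of aggregation.
```

Which encoding the GNN sees, and therefore how many rounds of aggregation it needs, is fixed by a modeling decision made before any learning happens.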
“GNNs are one of the hottest areas of deep-learning research and are being used in an ever-growing set of domains and applications. I think that in the field of GNN research, there are many things that we still do not know. It’s a field that is very much in the early days.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.