How to train large graph neural networks efficiently

As Amazon Scholar Chandan Reddy [recently observed](https://www.amazon.science/blog/kdd-graph-neural-networks-and-self-supervised-learning), graph neural networks are a hot topic at this year’s Conference on Knowledge Discovery and Data Mining (KDD). Graph neural networks create embeddings, or vector representations, of the nodes and edges of a [graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)), enabling new analyses, such as link prediction.

The embedding of a node typically factors in information not only about that node but also about its immediate neighbors and, often, their neighbors, too. In many real-world cases — the graph of Twitter users and their followers, for instance — a given node can have thousands or even millions of connections. In such cases, it’s not practical to account for all of a node’s neighbors. Instead, researchers have developed sampling methods that select subsets of the immediate neighbors for use in embedding.

In a paper we presented at KDD, my colleagues and I describe a new sampling strategy for training graph neural network models with a combination of CPUs and GPUs. In that context — which is common in real-world applications — our method reduces the amount of data transferred from CPU to GPU, greatly improving efficiency. In experiments, our method was two to 14 times as fast as prior methods, depending on the dataset used, while achieving accuracies that were as high or even higher.

![Caching node features in GPU memory during training](https://dev-media.amazoncloud.cn/56fc757067834c32b31b8e0313a6d5bd_%E4%B8%8B%E8%BD%BD%20%283%29.gif)

By caching data about graph nodes in GPU memory, global neighbor sampling dramatically reduces the amount of data transferred from the CPU to the GPU during the training of large graph neural networks.

#### **Mixed CPU-GPU training**

GPUs offer the most efficient way to perform the tensor operations used to train neural networks, but they have limited memory.
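As context for what follows, the per-node neighbor sampling described above can be sketched as below. This is a minimal illustration, not the paper's implementation; the adjacency-list representation and all names are hypothetical, and the sampling is uniform with a fixed fanout.

```python
import random

def sample_neighbors(adj, node, fanout):
    """Uniformly sample up to `fanout` neighbors of `node`.

    `adj` maps each node ID to a list of its neighbor IDs.
    If the node has no more than `fanout` neighbors, all are kept.
    """
    neighbors = adj[node]
    if len(neighbors) <= fanout:
        return list(neighbors)
    # random.sample draws without replacement
    return random.sample(neighbors, fanout)
```

For multi-layer models, this sampling is applied recursively: first to the target nodes, then to the sampled neighbors themselves.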
To train graph neural networks on graphs that are too large to fit in GPU memory, we typically use the CPU to create minibatches of randomly selected graph nodes and edges, which we send to the GPU, along with data describing each node — the node features.

To generate a minibatch, we need to sample neighbors for each target node — each node that’s being embedded — and, if necessary, neighbors of the sampled neighbors as well. This recursive neighbor sampling generates minibatches that require a large transfer of data between the CPU and the GPU. In our paper, we report a set of experiments showing that, with existing sampling strategies, copying node features of a minibatch from CPU to GPUs is the single most time-consuming aspect of model training.

#### **Global neighbor sampling**

Our sampling approach, which we call GNS, for global neighbor sampling, dramatically reduces the amount of data transferred from the CPU to the GPU.

The basic strategy is that, before creating a minibatch, we sample a set of nodes from the entire graph and load their features into GPU memory; we call this collection of node features the cache. When creating a minibatch, we sample neighbors of a node by simply retrieving the neighbors that are already in the cache. Only if the cache doesn’t contain enough neighbor nodes do we fetch additional nodes from the CPU.

To increase the likelihood that the relevant neighbors will be found in the cache, we preferentially sample nodes of high degree — that is, nodes with a large number of connections to other nodes. Sampling likelihood is proportional to the node degree, so the cache will still include a number of relatively low-degree nodes.
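Degree-proportional cache selection might look like the following sketch. All names are hypothetical, and the details (such as sampling with replacement and deduplicating) are one plausible way to realize the idea, not necessarily the paper's.

```python
import random

def build_cache(adj, cache_size, rng=random):
    """Pick node IDs for the GPU cache, favoring high-degree nodes.

    Each node is drawn with probability proportional to its degree,
    so hubs are very likely to be cached while some low-degree
    nodes still make it in.
    """
    eligible = [n for n in adj if len(adj[n]) > 0]
    weights = [len(adj[n]) for n in eligible]
    cache_size = min(cache_size, len(eligible))
    cache = set()
    while len(cache) < cache_size:
        # choices() samples with replacement; the set dedupes,
        # and we top up until the cache is full.
        cache.update(rng.choices(eligible, weights=weights,
                                 k=cache_size - len(cache)))
    return cache
```

The node features for the IDs in `cache` would then be copied to GPU memory once and reused across many minibatches.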
Because we know the probabilities of sampling the nodes in the cache, during embedding, we can weight the cached nodes to ensure a good approximation of the embedding that would have resulted from factoring in all of the neighbors.

In the paper, we prove that this approach will converge to the optimal model performance as efficiently as one that uses truly random sampling. This means that neither the bias toward high-degree nodes nor the reuse of the same cache for many minibatches should compromise model performance.

One important consideration is how to efficiently identify the nodes in the cache that are relevant for a given minibatch. Potentially, we could compute the overlap between the list of neighbors for a given node and the list of nodes in the cache. However, that computation is expensive. Instead, on the CPU, we create a subgraph that consists of all the nodes in the cache and all their immediate neighbors. When we assemble a minibatch, we simply look up the cached neighbors from the subgraph for each of its nodes.

In experiments, we compared our sampling strategy to three other methods on five datasets and found that, in the mixed CPU-GPU setting, ours was at least twice as fast as the second-best strategy on every dataset. Two of the three sampling strategies were consistently an order of magnitude slower than ours when trained to achieve comparable accuracies.

In our experiments, we restricted ourselves to a single CPU and a single GPU. In ongoing work, we are considering how to generalize the method to multiple GPUs and distributed training.
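Returning to the cache-subgraph idea described earlier: precomputing, for each node, which of its neighbors are cached makes the per-minibatch lookup a simple dictionary access. The sketch below is a hypothetical illustration of that precomputation, not the paper's data structure.

```python
def build_cache_subgraph(adj, cache):
    """Precompute, for every node, the subset of its neighbors
    that are in the GPU cache.

    `adj` maps node IDs to neighbor lists; `cache` is the set of
    cached node IDs. Returns a dict usable for O(1) lookups when
    assembling a minibatch.
    """
    cached_neighbors = {}
    for node, neighbors in adj.items():
        hits = [v for v in neighbors if v in cache]
        if hits:
            cached_neighbors[node] = hits
    return cached_neighbors
```

During minibatch assembly, `cached_neighbors.get(node, [])` returns the neighbors whose features are already on the GPU; only the shortfall, if any, needs to be fetched from the CPU.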
For instance, can we cache different sets of nodes on different GPUs and efficiently target each minibatch to the GPU whose cache offers the best match?

ABOUT THE AUTHOR

#### **[Da Zheng](https://www.amazon.science/author/da-zheng)**

Da Zheng is a senior applied scientist in the Amazon Web Services Deep Learning group.