Using supervised learning to train models for image clustering

{"value":"Most machine learning models use supervised learning, meaning they’re trained on annotated data, which is costly and time consuming to acquire.\n\nThe chief method for doing unsupervised learning, which doesn’t require annotated data, is clustering, or grouping data points together by salient characteristics. The idea is that each cluster represents some category, such as photos of the same person or the same species of animal.\n\nTo decide where to draw boundaries between clusters, clustering algorithms typically rely on heuristics, such as a threshold distance between cluster centers or the shape of the clusters’ distributions. In a ++[paper](https://www.amazon.science/publications/learning-hierarchical-graph-neural-networks-for-image-clustering)++ we’re presenting at the International Conference on Computer Vision (++[ICCV](https://www.amazon.science/conferences-and-events/iccv-2021)++), we propose, instead, to learn from data how to draw boundaries.\n\nWe first represent visual data using a ++[graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics))++, then use a graph neural network (GNN) to produce vector representations of the graph’s nodes. So far, we follow on previous work.\n\nInstead of relying on heuristics, however, we use labeled data to learn how to cluster the vectors and, crucially, to decide how fine-grained those clusters should be. We call the labeled data meta-training data, since the goal is to learn a general clustering technique, not a specific classification model. \n\nIn particular, we propose a hierarchical GNN, meaning that it creates clusters by adding edges between nodes of a graph, then adds edges between the clusters to create still larger clusters, and so on, iterating until it decides that no more edges should be added.\n\n![image.png](https://dev-media.amazoncloud.cn/5422bc75066b461d845c33fdb49d6e80_image.png)\n\nA schematic of our graph-based hierarchical clustering approach. The colors of the image borders and of the graph nodes indicate data types (in this case, photos of the same actor). Our approach is hierarchical, iteratively treating small clusters generated at one level as the units of clustering for the next level. We call our base model LANDER, for link approximation and density estimation refinement, and our hierarchical clustering method Hi-LANDER.\n\nFinally, we apply our hierarchical clustering technique to test sets whose classification categories are disjoint with those of the meta-training data. In our experiments we found that, compared to previous GNN-based supervised and unsupervised approaches, ours increased the F-score — which factors in both false positives and false negatives — by an average of 49% and 47%, respectively.\n\n#### **Constructing the graph**\n\nIn our paper, we investigate the case in which we are training a model to cluster visual data that is similar to the meta-training data but has no class overlaps with it. For instance, the meta-training data might be faces of movie stars, while the target application is to cluster faces of politicians, athletes, or other public figures.\n\nThe first step in our process is to use the meta-training data to build a supervised classifier: if the meta-training data is faces of movie stars, the classifier labels input images with names of movie stars.\n\nThe classifier is an encoder-decoder model: the encoder produces a fixed-length vector representation of the input, or feature vector, and the decoder uses that vector to predict a label. 
This graph will serve as the input to the clustering model, which is also an encoder-decoder model. The encoder is a GNN, which produces a vector representation of each node in the graph, based on that node’s feature vector and those of the nodes it’s connected to. Call this vector the node embedding.

#### **The clustering model**

We adopt a hierarchical approach to clustering. Based on the node embeddings, the clustering model predicts edges between nodes. A cluster is defined as a group of nodes each of which shares an edge with at least one other node in the group and none of which shares an edge with any node outside the group.
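In other words, once the model has committed to a set of links, reading off the clusters is a connected-components computation. A minimal sketch of that bookkeeping, using scipy and a toy edge list in place of real predicted links:

```python
# Sketch only: clusters = connected components of the predicted-link graph.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def clusters_from_edges(num_nodes: int, edges):
    """Assign a cluster ID to every node; unlinked nodes become singleton clusters."""
    rows, cols = zip(*edges) if edges else ((), ())
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(num_nodes, num_nodes))
    _, labels = connected_components(adj, directed=False)
    return labels

toy_edges = [(0, 1), (1, 2), (3, 4)]       # stand-ins for links the model predicted
print(clusters_from_edges(6, toy_edges))   # -> [0 0 0 1 1 2]
```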
Note that the goal of the clustering model is not just to reproduce the nearest-neighbor graph but to link nodes that represent data of the same type. The nearest-neighbor linkages are useful for predicting clustering linkages, but they are not identical with them.

After the first pass through the data, we aggregate each cluster into a single, representative “supernode” and repeat the whole process. That is, we create edges between each supernode and its k nearest neighbors, pass the resulting graph through the same GNN, and predict edges based on the supernode embeddings. We repeat this process until the clustering model predicts no edges between nodes.
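Here is a hedged outline of that outer loop. The `predict_edges` function is a hypothetical placeholder for the learned link predictor, `clusters_from_edges` is the helper from the previous sketch, and averaging features to form supernodes is an assumption for illustration rather than the paper's aggregation rule.

```python
# Sketch only: the hierarchical outer loop. `predict_edges(feats, k)` is a
# hypothetical stand-in for the trained GNN's link prediction; feature averaging
# for supernodes is an illustrative assumption.
import numpy as np

def hierarchical_cluster(features, predict_edges, k=5, max_levels=10):
    groups = [[i] for i in range(len(features))]   # each image starts as its own group
    level_feats = features
    for _ in range(max_levels):
        edges = predict_edges(level_feats, k)      # link prediction on this level's k-NN graph
        if not edges:                              # stop when no further links are predicted
            break
        labels = clusters_from_edges(len(level_feats), edges)   # see previous sketch
        new_groups, new_feats = [], []
        for c in range(labels.max() + 1):
            new_groups.append([i for g, l in zip(groups, labels) if l == c for i in g])
            new_feats.append(level_feats[labels == c].mean(axis=0))   # supernode feature
        groups, level_feats = new_groups, np.stack(new_feats)
    return groups   # final clusters, as lists of original image indices
```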
We train our clustering model on two different objectives. One is to correctly predict links between nodes, where a correct link is one that picks out two representatives of the same data type in the meta-training data (say, two photos of the same actor).

We also train the model to correctly predict the density of a given data type in a given graph neighborhood. That is, for each node, the model should predict the proportion of nearby neighbors of the same data type.

Past research on clustering has shown that factoring in data density improves results. Previously, however, link prediction and data density prediction were handled by separate models. By using a single model to jointly predict both, we significantly increase computational efficiency. We believe that the combination also contributes to our increase in accuracy.
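A sketch of what such a joint objective could look like in PyTorch, assuming the GNN emits one logit per candidate edge and one density estimate per node; the specific loss terms and the weight `alpha` are illustrative choices, not the paper's exact formulation.

```python
# Sketch only: a joint objective combining link prediction and neighborhood-density
# estimation. Loss choices and the weight `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F

def joint_loss(link_logits, link_labels, density_pred, density_target, alpha=1.0):
    """link_logits/link_labels: one entry per candidate edge (label 1 = same class).
    density_pred/density_target: one entry per node (fraction of same-class neighbors)."""
    link_term = F.binary_cross_entropy_with_logits(link_logits, link_labels.float())
    density_term = F.mse_loss(density_pred, density_target)
    return link_term + alpha * density_term

# Toy usage with random tensors standing in for GNN outputs and ground truth
link_logits = torch.randn(32)
link_labels = torch.randint(0, 2, (32,))
density_pred = torch.rand(100)
density_target = torch.rand(100)
print(joint_loss(link_logits, link_labels, density_pred, density_target))
```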
The other novelty of our approach is that, because of our hierarchical processing scheme, we optimize clustering across the entire input graph. Previous approaches would first divide the graph into subgraphs, then perform inference within subgraphs. This prevents the natural parallelization that makes full-graph processing runtime efficient, and it limits the flow of information through the graph. The full graph-wide processing is another reason for our model’s improved efficiency.

In experiments, we considered two different sets of meta-training data. One consisted of closeups of human faces, the other of images of particular animal species. We tested the model trained on human faces on two other datasets, whose data categories had zero or very little overlap with those of the meta-training set — 0% and less than 2%. We tested the model trained on animal species on a dataset of previously unseen species. Across both models and the three test sets, our average improvements over previous GNN-based clustering models and unsupervised clustering methods were 49% and 47%, respectively.

In ongoing work, we are investigating the possibility of training a more general clustering model, whose performance at inference time will be more transferable across different data types — accurately clustering both faces and animal species, for instance.

**Acknowledgements**: [Tianjun Xiao](https://www.amazon.science/author/tianjun-xiao), [Yongxin Wang](https://www.amazon.science/author/yongxin-wang), [Yuanjun Xiong](https://www.amazon.science/author/yuanjun-xiong), [Wei Xia](https://www.amazon.science/author/wei-xia), [David Wipf](https://www.amazon.science/author/david_wipf), [Zheng Zhang](https://www.amazon.science/author/zheng-zhang), [Stefano Soatto](https://www.amazon.science/author/stefano-soatto)

ABOUT THE AUTHOR

#### **[Yifan Xing](https://www.amazon.science/author/yifan-xing)**

Yifan Xing is an applied scientist with Amazon Web Services.

#### **[Tong He](https://www.amazon.science/author/tong-he)**

Tong He is an applied scientist with Amazon Web Services.