Hierarchical representations improve image retrieval

{"value":"Image matching has many practical applications. For instance, image retrieval systems like Amazon’s ++[StyleSnap](https://www.amazon.com/stylesnap)++ or the Amazon Shopping app’s Camera Search let customers upload photos to search for similar images. Image matching usually works by mapping images to a representational space (an embedding space) and finding images whose mappings are nearby.\n\nIn a paper that we presented last week at ++[WACV 2022](https://www.amazon.science/conferences-and-events/wacv-2022)++, we explained how to improve image retrieval accuracy by ++[explicitly modeling object hierarchies](https://www.amazon.science/publications/hierarchical-proxy-based-loss-for-deep-metric-learning)++ when training neural networks to compute image representations.\n\n![image.png](https://dev-media.amazoncloud.cn/4144f5026bff45faa269a352c5256a6b_image.png)\n\nA simplified hierarchy of product images from the Amazon Store.\n\nA shopping site, for instance, might classify a group of products as apparel, a superclass that contains the classes T-shirt and hoodie, which in turn contain instances of specific T-shirts and specific hoodies.\n\nIn our paper, we show how to leverage such hierarchies when building image retrieval systems or, if no hierarchies exist, how to construct them. In experiments that compared our approach to nine predecessors on five different datasets using multiple performance measures, we find that our approach delivers the best results a large majority of the time.\n\n#### **Deep metric learning**\n\nImage matching for image retrieval typically relies on deep metric learning (DML), in which a deep neural network learns not only how to map inputs to an embedding space but also the distance function used to measure proximity in that space.\n\nThere are two dominant loss functions for training DML networks: pairwise and proxy losses. Pairwise losses (e.g., contrastive, triplet) are computed between positive and negative pairs, pulling positive pairs closer, while pushing negative pairs apart. Proxy losses (e.g., proxy-NCA, proxy-anchor) learn a set of embeddings, called proxies, that represent the average locations of members of a class, or the class centroid. The loss for each training sample is computed with respect to the proxies.\n\n![image.png](https://dev-media.amazoncloud.cn/5ceee9e885564f2b90f2069ff327bbde_image.png)\n\nAn example of proxy loss. Circles of the same color represent members of the same class, squares of the same color their centroids. Dotted lines indicate pairs of embeddings that the training process tries to push apart, solid lines pairs of embeddings that training tries to pull together.\n\nPairwise losses need to sample informative pairs/triplets from the training data; this is not needed in proxy losses, removing the complexity of pair sampling and speeding up the training. In particular, the ++[proxy anchor loss](https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_Proxy_Anchor_Loss_for_Deep_Metric_Learning_CVPR_2020_paper.pdf)++ has been shown to achieve state-of-the-art image retrieval accuracy, while converging much faster than pairwise losses.\n\nOur work proposes a new proxy loss that explicitly uses information about class hierarchies to improve image retrieval accuracy.\n\n#### **Hierarchical data**\n\nWith hierarchical data, there is an opportunity to impose additional constraints on the embedding space via the loss function, so that images in the same superclass are also grouped together, as shown below. 
#### **Hierarchical data**

With hierarchical data, there is an opportunity to impose additional constraints on the embedding space via the loss function, so that images in the same superclass are also grouped together, as shown below. This not only helps the model generalize to unseen classes, because it learns the commonalities within superclasses, but also leads to more reasonable retrievals when the model makes mistakes.

![image.png](https://dev-media.amazoncloud.cn/5fdaadd6dc0b4d95aee5dae829761d6c_image.png)

Flat versus hierarchical embedding spaces, with six classes (1–6) and two superclasses (C1, C2).

#### **Hierarchical proxy loss**

Our hierarchical proxy loss (HPL) is an extension of existing proxy losses. HPL consists of a hierarchy of proxies, and each training image is assigned to a single proxy at every level, as shown in the next figure. The loss is then computed as the weighted sum of the proxy losses at all levels.

At every level, each image is pulled toward its assigned proxy and pushed away from all the other proxies. This induces the network to group images hierarchically by learning the commonalities within each group at every level.

![image.png](https://dev-media.amazoncloud.cn/1ce5259df0234687825245b1b92cf8f2_image.png)

Hierarchical proxy loss with two levels of proxies. Colored circles are individual data points; colored ovals are hierarchical clusters of data; colored squares are proxies. The dotted lines connecting coarse and fine proxies indicate that the coarse proxies are calculated from the fine. (At the fine-proxy level, some links are omitted for clarity.)
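The weighted sum over levels can be expressed directly on top of a per-level proxy loss. Below is a sketch under the same assumptions as before (PyTorch; names illustrative). `labels_per_level` supplies each sample's assignment at every level, e.g., its fine-grained class and its superclass:

```python
import torch.nn as nn

class HierarchicalProxyLoss(nn.Module):
    """Sketch of HPL: a weighted sum of per-level proxy losses."""

    def __init__(self, level_losses, level_weights):
        super().__init__()
        self.level_losses = nn.ModuleList(level_losses)  # one loss per level, fine to coarse
        self.level_weights = level_weights               # scalar weight per level

    def forward(self, embeddings, labels_per_level):
        # At each level, the sample is pulled to its assigned proxy and
        # pushed from the others; the per-level losses are then summed.
        return sum(
            w * loss_fn(embeddings, labels)
            for loss_fn, w, labels
            in zip(self.level_losses, self.level_weights, labels_per_level)
        )
```

For a two-level hierarchy, this might be instantiated as `HierarchicalProxyLoss([ProxyLoss(100, 128), ProxyLoss(10, 128)], [1.0, 0.5])`, where the class counts, embedding size, and level weights are all placeholder values.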
The question, then, is how to build the proxy hierarchy when no data hierarchy is provided by, say, an e-commerce catalogue. In such cases, we apply online clustering to the lower-level proxies during training to obtain the higher-level proxies.

We begin with a DML model that has been trained to generate proxies at the most fine-grained level. Then we run the following training algorithm (a code sketch follows the list):

1. Run a clustering algorithm, e.g., [k-means](https://en.wikipedia.org/wiki/K-means_clustering), on the fine proxies to obtain coarse proxies; assign each sample to one coarse proxy.
2. Train the network for T iterations, updating both the network and the fine-grained proxies.
3. After every T iterations, update the assignments of samples to coarse proxies, and update the higher-level proxies by averaging the assigned lower-level proxies.
4. Repeat steps 2 and 3 until convergence.
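Here is a minimal sketch of steps 1 and 3, assuming scikit-learn's k-means (the function and variable names are ours, not the paper's):

```python
import torch
from sklearn.cluster import KMeans

def rebuild_coarse_level(fine_proxies: torch.Tensor, num_coarse: int):
    """Cluster fine proxies into coarse proxies and return the assignment."""
    km = KMeans(n_clusters=num_coarse, n_init=10)
    fine_to_coarse = km.fit_predict(fine_proxies.detach().cpu().numpy())
    fine_to_coarse = torch.as_tensor(fine_to_coarse, device=fine_proxies.device)
    # Each coarse proxy is the average of the fine proxies assigned to it.
    coarse_proxies = torch.stack(
        [fine_proxies[fine_to_coarse == c].mean(dim=0) for c in range(num_coarse)]
    )
    return coarse_proxies, fine_to_coarse
```

In a training loop, this function would be called once at the start and again after every T iterations; a sample with fine-level label `y` then gets the coarse label `fine_to_coarse[y]` for the coarse-level term of the loss.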
#### **Results**

![image.png](https://dev-media.amazoncloud.cn/42f7b370976f46e88663801da4321009_image.png)

Comparison of our HPL to proxy-NCA and proxy anchor loss on two of our benchmark datasets.

We implemented HPL on top of the latest proxy losses, [proxy-NCA](https://openaccess.thecvf.com/content_ICCV_2017/papers/Movshovitz-Attias_No_Fuss_Distance_ICCV_2017_paper.pdf) and [proxy anchor](https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_Proxy_Anchor_Loss_for_Deep_Metric_Learning_CVPR_2020_paper.pdf) loss, the second of which is the state-of-the-art loss in metric learning. We evaluated image retrieval accuracy on five standard metric-learning datasets and found that HPL consistently improved retrieval accuracy over both proxy-NCA and proxy anchor loss, achieving a new state of the art. A comprehensive experimental evaluation is available in our [paper](https://www.amazon.science/publications/hierarchical-proxy-based-loss-for-deep-metric-learning).
ABOUT THE AUTHOR

#### **[Muhammet Bastan](https://www.amazon.science/author/muhammet-bastan)**

Muhammet Bastan is an applied scientist at Amazon.