Better joint representations of image and text

{"value":"This year, the Amazon Search team had two papers accepted at the Conference on Computer Vision and Pattern Recognition (++[CVPR](https://www.amazon.science/conferences-and-events/cvpr-2022)++), both focusing on image-text feature alignment, or training a neural network to produce joint representations of images and their associated texts. Such representations are useful for a range of computer vision tasks, including text-based image search and image-based text search.\n\nTypically, joint image-text models are trained using contrastive learning, in which the model is fed training samples in pairs, one positive and one negative, and it learns to pull the positive examples together in the representation space and push the positive and negative examples apart. So, for example, a model might be trained on images with pairs of associated text labels, one the correct label and one a random label, and it would learn to associate images with the correct labels in a shared, multimodal representative space.\n\nBoth of our CVPR papers grow out of the same observation: that simply enforcing the alignment between different modalities with strong contrastive learning may cause the degeneration of the learned features. To address this problem, our papers explore different ways of imposing additional structure on the representational space, so that the model learns more robust image-text alignments.\n\n#### **Representation codebooks**\n\nA neural network trained on multimodal data will naturally tend to cluster data of the same type together in the representational space, a tendency at odds with the purpose of image-text representation learning, which seeks to cluster images with their associated texts.\n\nTo combat this tendency, in “++[Multi-modal alignment using representation codebook](https://www.amazon.science/publications/multi-modal-alignment-using-representation-codebook)++”, we propose to align image and text at a higher and more stable level using cluster representation. Specifically, we treat image and text as two “views” of the same entity and use a codebook of cluster centers to span the joint vision-language coding space. That is, each center anchors a cluster of related concepts, whether those concepts are expressed visually or textually.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/8b09a9a031874a6497dc1f2a7ff5b775_%E4%B8%8B%E8%BD%BD.jpg)\n\nA new approach to image-text alignment treats image and text as two “views” of the same entity and uses a codebook of cluster centers to span the joint vision-language coding space.\n\nDuring training, we contrast positive and negative samples via their cluster assignments while simultaneously optimizing the cluster centers. To further smooth out the learning process, we adopt a teacher-student distillation paradigm, in which the output of a teacher model provides training targets for a student model. Specifically, we use the model output for one view — image or text — to guide the student learning of the other. We evaluated our approach on common vision-language benchmarks and obtain new state of the art on zero-shot cross-modality retrieval, or image-based text retrieval and text-based image retrieval on data types unseen during training. Our model is also competitive on various other transfer tasks.\n\n![下载 1.jpg](https://dev-media.amazoncloud.cn/1b5a078b6b7347e5a64a59aed0440cc7_%E4%B8%8B%E8%BD%BD%20%281%29.jpg)\n\nAn overview of the representative-codebook approach. 
Both of our CVPR papers grow out of the same observation: simply enforcing alignment between modalities with strong contrastive learning may cause the learned features to degenerate. To address this problem, our papers explore different ways of imposing additional structure on the representation space, so that the model learns more robust image-text alignments.

#### **Representation codebooks**

A neural network trained on multimodal data will naturally tend to cluster data of the same type together in the representation space, a tendency at odds with the purpose of image-text representation learning, which seeks to cluster images with their associated texts.

To combat this tendency, in “[Multi-modal alignment using representation codebook](https://www.amazon.science/publications/multi-modal-alignment-using-representation-codebook)”, we propose to align image and text at a higher and more stable level using cluster representations. Specifically, we treat image and text as two “views” of the same entity and use a codebook of cluster centers to span the joint vision-language coding space. That is, each center anchors a cluster of related concepts, whether those concepts are expressed visually or textually.

![Codebook-based image-text alignment](https://dev-media.amazoncloud.cn/8b09a9a031874a6497dc1f2a7ff5b775_%E4%B8%8B%E8%BD%BD.jpg)

A new approach to image-text alignment treats image and text as two “views” of the same entity and uses a codebook of cluster centers to span the joint vision-language coding space.

During training, we contrast positive and negative samples via their cluster assignments while simultaneously optimizing the cluster centers. To further smooth out the learning process, we adopt a teacher-student distillation paradigm, in which the output of a teacher model provides training targets for a student model. Specifically, we use the model output for one view (image or text) to guide the student's learning of the other. We evaluated our approach on common vision-language benchmarks and obtained a new state of the art in zero-shot cross-modal retrieval, that is, image-based text retrieval and text-based image retrieval on data types unseen during training. Our model is also competitive on various other transfer tasks.

![Overview of the representation-codebook model](https://dev-media.amazoncloud.cn/1b5a078b6b7347e5a64a59aed0440cc7_%E4%B8%8B%E8%BD%BD%20%281%29.jpg)

An overview of the representation-codebook approach. For simplicity, only one student-teacher pair is shown (teacher for the image, student for the text).
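To make the idea of contrasting via cluster assignments concrete, here is a rough sketch, assuming a learnable codebook of prototype vectors and soft (softmax) assignments of each embedding to those prototypes. Using detached assignments as teacher targets is a simplification of the teacher-student distillation described in the paper; the class name, dimensions, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

class CodebookAlignment(torch.nn.Module):
    """Sketch: align image and text via soft assignments to a shared codebook."""

    def __init__(self, dim=256, num_codes=1024, temperature=0.1):
        super().__init__()
        # Learnable cluster centers spanning the joint vision-language space.
        self.codebook = torch.nn.Parameter(torch.randn(num_codes, dim))
        self.temperature = temperature

    def assignments(self, emb):
        # Soft assignment of each embedding to the cluster centers.
        emb = F.normalize(emb, dim=-1)
        codes = F.normalize(self.codebook, dim=-1)
        return F.softmax(emb @ codes.t() / self.temperature, dim=-1)

    def forward(self, image_emb, text_emb):
        p_img = self.assignments(image_emb)   # [batch, num_codes]
        p_txt = self.assignments(text_emb)    # [batch, num_codes]

        # Each view's detached assignment acts as a teacher target for the
        # other view's prediction: cross-entropy between the two distributions.
        loss_i2t = -(p_img.detach() * torch.log(p_txt + 1e-8)).sum(-1).mean()
        loss_t2i = -(p_txt.detach() * torch.log(p_img + 1e-8)).sum(-1).mean()
        return (loss_i2t + loss_t2i) / 2
```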
#### **Triple contrastive learning**

The success of contrastive learning in training image-text alignment models has been attributed to its ability to maximize the mutual information between image and text pairs, or the extent to which image features can be predicted from text features and vice versa.

However, simply performing cross-modal alignment (CMA) ignores potentially useful correlations within each modality. For instance, although CMA maps image-text pairs close together in the embedding space, it fails to ensure that similar inputs from the same modality stay close to each other. This problem can get even worse when the pretraining data is noisy.

In “[Vision-language pre-training with triple contrastive learning](https://www.amazon.science/publications/vision-language-pre-training-with-triple-contrastive-learning)”, we address this problem with triple contrastive learning (TCL) for vision-language pretraining. This approach leverages both cross-modal and intramodal self-supervision, or training on tasks contrived so that they don’t require labeled training examples. Besides CMA, TCL introduces an intramodal contrastive (IMC) objective that provides complementary benefits in representation learning.

![TCL architecture and motivation](https://dev-media.amazoncloud.cn/37e57b5171e44325aa56400953b24a0e_%E4%B8%8B%E8%BD%BD%20%282%29.jpg)

At left (A) is the architecture of the model, with its three loss functions (LMI, IMC, and CMA). At right (B) is an illustration of the motivation for the triple contrastive loss: adding the intramodal contrastive (IMC) loss enables the model to learn a more reasonable embedding (blue square).

To take advantage of localized and structural information in the image and text inputs, TCL further maximizes the average mutual information between local regions of the image or text and their global summary (the LMI loss). To the best of our knowledge, ours is the first work that takes local structure information into account for multimodal representation learning. Experimental evaluations show that our approach is competitive and achieves a new state of the art on various common downstream vision-language tasks, such as image-text retrieval and visual question answering.
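As a rough illustration of how the three terms can fit together, the sketch below adds an intramodal term (each modality contrasted against an augmented or momentum view of itself) and a local-global mutual-information term (local image-region features contrasted against their image's global summary) to the cross-modal loss. The equal weighting, the InfoNCE estimators, and the argument names are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(query, key, temperature=0.07):
    """InfoNCE over a batch: row i of `key` is the positive for row i of `query`."""
    query = F.normalize(query, dim=-1)
    key = F.normalize(key, dim=-1)
    logits = query @ key.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def triple_contrastive_loss(img, txt, img_aug, txt_aug, img_patches):
    """Sketch of a TCL-style objective with three contrastive terms.

    img, txt:         [B, D] global image/text embeddings (matched pairs)
    img_aug, txt_aug: [B, D] embeddings of augmented (or momentum) views
    img_patches:      [B, P, D] local image-region embeddings
    """
    # 1) Cross-modal alignment (CMA): image vs. paired text, both directions.
    cma = 0.5 * (info_nce(img, txt) + info_nce(txt, img))

    # 2) Intramodal contrast (IMC): each modality vs. another view of itself,
    #    so similar inputs from the same modality stay close together.
    imc = 0.5 * (info_nce(img, img_aug) + info_nce(txt, txt_aug))

    # 3) Local MI (LMI): contrast each local region against its own image's
    #    global summary, with other images in the batch as negatives
    #    (one plausible reading of "local regions vs. their global summary").
    B, P, D = img_patches.shape
    local = F.normalize(img_patches, dim=-1).reshape(B * P, D)
    glob = F.normalize(img, dim=-1)
    logits = local @ glob.t() / 0.07                      # [B*P, B]
    targets = torch.arange(B, device=logits.device).repeat_interleave(P)
    lmi = F.cross_entropy(logits, targets)

    return cma + imc + lmi
```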
ABOUT THE AUTHOR

#### **[Liqun Chen](https://www.amazon.science/author/liqun-chen)**

Liqun Chen is an applied scientist in the Amazon Search group.