Searching video using natural-language descriptions

{"value":"In an ideal world, finding a particular section of a video would be as simple as describing it in natural language — saying, for instance, “the person pours ingredients into a mixer”.\n\nAt this year’s meeting of the ACM Special Interest Group on Information Retrieval ([SIGIR](https://www.amazon.science/conferences-and-events/sigir-2021)), my colleagues and I are [presenting a new method](https://www.amazon.science/publications/cross-interaction-network-for-natural-language-guided-video-moment-retrieval) for doing such natural-language-guided video moment retrieval (VMR).\n\nOur method dispenses with the complex iterated message-passing procedure adopted by some of its predecessors, so it reduces training time; in one of our experiments, training our model took one-third as long as training the prior state-of-the-art model on the same data and the same hardware. At the same time, our model outperforms its predecessors, with relative improvement of up to 11% on the relevant metrics and datasets.\n\nOur model has two chief novelties:\n\n- Early fusion/cross-attention: Some prior models use “late fusion”, meaning that the video segments and the query are embedded in a representational space independently, and then the model selects the video segment nearest the query according to some metric distance. We instead use an early-fusion approach, in which the embeddings for the query and the video segments are determined in a cross-coordinated way. And where some prior early-fusion methods used iterated message passing to do cross-coordination, we use a much simpler cross-attention mechanism. \n- Multitask training: We train our model on two tasks simultaneously. One is the identification of the start and stop points of a video sequence; the other is the binary classification of each of the frames between those points as part of the sequence or not. Annotator disagreement, meaning discrepancies in the start and stop times identified in the training data, can reduce model accuracy; the binary-classification task leverages the continuity in the annotation of the segment frames, which corrects imbalances in the training data.\n\n#### **Cross-attention**\n\nIn the past, natural-language VMR models have represented both query texts and sequences of video frames as [graphs](https://en.wikipedia.org/wiki/Graph_theory). The models work out correspondences between the words of a text and the frames of a sequence through a message-passing scheme, in which each node of the text graph sends messages to multiple nodes of the video graph, and vice versa. The model refines its embeddings of the query and frames based on correspondences that emerge over several rounds of message passing.\n\nIn our model, by contrast, we first encode both the query and a candidate video segment, then use a cross-interaction multi-head attention mechanism to identify which features of the query encoding are most relevant to the video encoding, and vice versa. 
![下载.gif](https://dev-media.amazoncloud.cn/5807fe62b8f14e5ba78da12c4c3ad84b_%E4%B8%8B%E8%BD%BD.gif)

The researchers’ model features two innovations: (1) an “early-fusion” cross-attention mechanism that conditions the model’s representation of the text query on the video sequence, and vice versa, and (2) simultaneous training on two tasks, the estimation of start and stop points and a binary classification of video frames as belonging to the target sequence or not.

On the basis of that cross-interaction, the model outputs a video embedding that factors in aspects of the query and a query embedding that factors in aspects of the video. Those embeddings are concatenated to produce a single fused embedding, which passes to two separate classifiers. One classifier identifies start/stop points, and the other classifies video frames as part of the relevant segment or not.
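A minimal sketch of how such a two-headed, multitask setup might look, assuming the fused embedding is kept per video frame; the layer sizes, head names, and equal loss weighting are illustrative assumptions rather than details from the paper.

```python
# Sketch of a dual-head classifier over a fused (query + video) representation:
# one head scores candidate start/end positions, the other labels each frame as
# inside or outside the target moment. Names and sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHead(nn.Module):
    def __init__(self, fused_dim=512):
        super().__init__()
        self.boundary_head = nn.Linear(fused_dim, 2)    # per-frame start/end scores
        self.membership_head = nn.Linear(fused_dim, 1)  # per-frame inside/outside score

    def forward(self, fused):                            # fused: (batch, n_frames, fused_dim)
        start_end_logits = self.boundary_head(fused)             # (batch, n_frames, 2)
        inside_logits = self.membership_head(fused).squeeze(-1)  # (batch, n_frames)
        return start_end_logits, inside_logits

def multitask_loss(start_end_logits, inside_logits, start_idx, end_idx, inside_mask):
    # Boundary task: cross-entropy over frame positions for the start and end points.
    start_loss = F.cross_entropy(start_end_logits[..., 0], start_idx)
    end_loss = F.cross_entropy(start_end_logits[..., 1], end_idx)
    # Membership task: binary classification of every frame.
    member_loss = F.binary_cross_entropy_with_logits(inside_logits, inside_mask)
    # Equal weighting here; in practice the mix would be a tunable hyperparameter.
    return start_loss + end_loss + member_loss

# Toy usage with made-up shapes and labels.
head = DualHead()
fused = torch.randn(2, 128, 512)
se_logits, in_logits = head(fused)
loss = multitask_loss(se_logits, in_logits,
                      start_idx=torch.tensor([10, 40]),
                      end_idx=torch.tensor([50, 90]),
                      inside_mask=torch.zeros(2, 128))
```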
To test our approach, we used two benchmark datasets, both of which contain videos in which some frames have been annotated with descriptive texts. We compared our method to five prior models, three of which have achieved state-of-the-art results.

We evaluated the models’ performance using intersection over union (IoU), the ratio of the number of correctly labeled video segment frames to the total number of frames labeled as belonging to the segment either by the model or in the dataset. A correct retrieval was defined as one that met some threshold IoU. We experimented with three thresholds: 0.3, 0.5, and 0.7.
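For concreteness, here is how a temporal IoU of this kind can be computed for a single prediction; the segment boundaries below are made-up numbers, not examples from the benchmark datasets.

```python
# Worked example of the IoU metric described above: the overlap between the
# predicted and annotated segments divided by their union (times in seconds).
def temporal_iou(pred_start, pred_end, gt_start, gt_end):
    intersection = max(0.0, min(pred_end, gt_end) - max(pred_start, gt_start))
    union = (pred_end - pred_start) + (gt_end - gt_start) - intersection
    return intersection / union if union > 0 else 0.0

iou = temporal_iou(12.0, 30.0, 15.0, 32.0)  # overlap is 15 s, union is 20 s
print(round(iou, 2))                         # 0.75

# A retrieval counts as correct when the IoU meets the chosen threshold.
for threshold in (0.3, 0.5, 0.7):
    print(threshold, iou >= threshold)       # True, True, True
```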
![image.png](https://dev-media.amazoncloud.cn/abc8ecc0918a4c1584c53d3d4d96fd4b_image.png)

Examples of the performance of the researchers’ model on data from the test set.

Across six experiments — two datasets and three IoU thresholds — our approach outperformed all of its predecessors five times. In the sixth case, one prior model had a slight edge (a 1% relative improvement). But that same model fell 37% short of ours on the experiment in which our model showed the biggest gains.